Saturday, December 29, 2012

Resilient Systems - Survivability of Software Systems

Resilience, as we all know, is the ability to withstand tough times. Another term used quite interchangeably is Reliability, but Reliability and Resilience are different. Reliability is about a system or process that has zero tolerance for failure, one that should not fail. In other words, when we talk about reliable systems, the context is that failure is not expected, or rather not acceptable. Resilience, on the other hand, is about the ability to recover from failures. What is important to understand about resilience is that failure is expected and is inherent in any system or process, and might be triggered by changes to the platform, environment or data. While Reliability is about a system’s robustness in not failing, Resilience is its ability to sense or detect failures ahead, prevent the events that lead to failure where possible, and, when failure cannot be avoided, allow it to happen and then recover from it sooner.


A working definition for resilience (of a system) developed by the Resilient Systems Working Group (RSWG) is as follows:

“Resilience is the capability of a system with specific characteristics before, during and after a disruption to absorb the disruption, recover to an acceptable level of performance, and sustain that level for an acceptable period of time.” The following words were clarified:
  • The term capability is preferred over capacity since capacity has a specific meaning in the design principles.
  • The term system is limited to human-made systems containing software, hardware, humans, concepts, and processes. Infrastructures are also systems.
  • The term sustain allows determination of long-term performance to be stated.
  • Characteristics can be static features, such as redundancy, or dynamic features, such as corrective action to be specified.
  • Before, during and after – Allows the three phases of disruption to be considered.
    • Before – Allows anticipation and corrective action to be considered
    • During – How the system survives the impact of the disruption
    • After – How the system recovers from the disruption
  • Disruption is the initiating event of a reduction in performance. A disruption may be either a sudden or a sustained event. A disruption may be either internal (human or software error) or external (earthquake, tsunami, hurricane, or terrorist attack).

Evan Marcus and Hal Stern, in their book Blueprints for High Availability, define a resilient system as one that can take a hit to a critical component and recover and come back for more in a known, bounded, and generally acceptable period of time.


In general, Resilience is a term of concern for Information Security professionals, as the final impact of a disruption (from which a system needs to recover) falls mostly on Availability, one of the three tenets of Information Security (CIA: Confidentiality, Integrity and Availability). But there is also a lot for system designers and developers, especially those tasked with building mission-critical systems, to be concerned about, and they should architect their systems to build in the required level of resilience characteristics. For a system to be resilient, it should draw the necessary support from dependent software and hardware components, systems and the platform. For instance, a disruption to a web application can come from a network outage or from security attacks at the network layer, which the software has no control over. It is therefore important to consider software resiliency in relation to the resiliency of the entire system, including the human and operational components.


The PDR (Protect - Detect - React) strategy is no longer as effective as it used to be, due to various factors. It is time that predictive analytics and disruptive technologies like big data and machine learning were considered for enhancing system resiliency. Based on the logs of various interconnected applications or components and other traffic data on the network, intelligence needs to be built into the system to trigger a combination of possible actions. For instance, if there is reason to suspect an attacker attempting to gain access to the systems, a possible action could be to operate the system in a reduced-access mode, i.e. parts of the system may be shut down or parts of the networks to which the system is exposed could be blocked, etc.
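To make this concrete, here is a minimal, purely illustrative sketch in Python of the idea: watch a rolling window of authentication outcomes and switch into a reduced-access mode when the failure rate looks suspicious. The window size, threshold and response hook are my own assumptions, not taken from any particular product.

```python
from collections import deque

WINDOW_SIZE = 100        # last N authentication attempts considered (assumed)
FAILURE_THRESHOLD = 0.4  # failure fraction treated as suspicious (assumed)

class AccessModeController:
    """Tracks recent outcomes and degrades the system gracefully under suspicion."""

    def __init__(self):
        self.attempts = deque(maxlen=WINDOW_SIZE)
        self.reduced_mode = False

    def record_attempt(self, succeeded: bool) -> None:
        self.attempts.append(succeeded)
        self._evaluate()

    def _evaluate(self) -> None:
        if len(self.attempts) < WINDOW_SIZE:
            return  # not enough signal yet
        failure_rate = self.attempts.count(False) / len(self.attempts)
        if failure_rate >= FAILURE_THRESHOLD and not self.reduced_mode:
            self.reduced_mode = True
            self._reduce_access()

    def _reduce_access(self) -> None:
        # Placeholder: disable non-essential endpoints, block exposed
        # network segments, alert operations staff, and so on.
        print("Suspicious failure rate: switching to reduced-access mode")
```

A real implementation would of course draw on many more signals (logs from interconnected components, network traffic baselines) and richer models than a single threshold.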


OWASP’s AppSensor project is worth checking out by architects and developers. The AppSensor project defines a conceptual framework and methodology that offers prescriptive guidance for implementing intrusion detection and automated response within an existing application. AppSensor defines over 50 detection points which can be used to identify a malicious attacker, and provides guidance in the form of possible responses for each such detected malicious activity.
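AppSensor itself ships as guidance with a Java reference implementation; the Python fragment below is only my own illustration of the concept, with invented detection-point names, thresholds and responses. Each detection point counts suspicious events per user, and crossing the threshold escalates the response.

```python
RESPONSES = ["log", "warn_user", "lock_account"]  # assumed escalation ladder
THRESHOLDS = {
    "unexpected_http_method": 3,     # assumed detection points and limits
    "multiple_failed_passwords": 5,
}
event_counts = {}

def detect(user: str, detection_point: str) -> str:
    """Record one suspicious event and return the response to apply."""
    key = (user, detection_point)
    event_counts[key] = event_counts.get(key, 0) + 1
    threshold = THRESHOLDS.get(detection_point, 10)
    # Escalate one level each time another multiple of the threshold is hit.
    level = min(event_counts[key] // threshold, len(RESPONSES) - 1)
    return RESPONSES[level]

for _ in range(5):
    action = detect("mallory", "multiple_failed_passwords")
print(action)  # -> "warn_user" once the threshold of 5 is crossed
```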


The following are some of the factors that need to be appropriately addressed to enhance the resilience of the systems:

Complexity - With systems continuously evolving and many discrete systems and components increasingly integrated into today’s IT ecosystem, complexity is on the rise. This means the resiliency routines built into individual systems need constant review and revision.

Cloud - Cloud computing is gaining greater acceptance, and as organizations embrace the cloud for their IT needs, data, systems and components become spread across the globe, which brings challenges for those involved in building resilience.

Agility - To stay on top of the competition, organizations need agility in their business processes, which means rapid changes in the underlying systems. This can be a challenge, as it calls for constant checks to ensure that the changes being introduced do not downgrade or compromise the resiliency level of the systems.


While there are techniques and guiding principles which, when followed and applied, can greatly improve the resilience of systems, such design or implementation comes with a price, and that is where the economics of resiliency needs to be considered. For instance, mission-critical software systems like those used in medical devices need highly resilient characteristics, but many business systems can have a higher tolerance for failure and can thereby be less resilient. Either way, it is good to document the expected resilience level at the initial stage and work on it early in the system development life cycle; thinking about resilience later in the life cycle may not do much good, as implementation will then call for a higher investment.



References:

Crosstalk - The journal of Defense Software Engineering Vol 22 No:6

Resilient Systems Working Group

OWASP - AppSensor Project

Saturday, December 15, 2012

Effective vs Ineffective Security Governance

Continuing with my earlier blog on Measuring the Performance of EA, I was looking for methods and measures that can be used for measuring the effectiveness of the security program in an enterprise. I happened to read a CERT article titled Characteristics of Effective Security Governance, which contains a good comparison of what is effective and what is ineffective. I have reproduced it here in this blog for quick reference. The original CERT article, though dated, is worth reading.

Effective:
  • Board members understand that information security is critical to the organization and demand to be updated quarterly on security performance and breaches.
  • The board establishes a board risk committee (BRC) that understands security’s role in achieving compliance with applicable laws and regulations, and in mitigating organization risk.
  • The BRC conducts regular reviews of the ESP.
  • The board’s audit committee (BAC) ensures that annual internal and external audits of the security program are conducted and reported.
Ineffective or Absent:
  • Board members do not understand that information security is in their realm of responsibility, and focus solely on corporate governance and profits.
  • Security is addressed ad hoc, if at all.
  • Reviews are conducted following a major incident, if at all.
  • The BAC defers to internal and external auditors on the need for reviews. There is no audit plan to guide this selection.

Effective:
  • The BRC and executive management team set an acceptable risk level. This is based on comprehensive and periodic risk assessments that take into account reasonably foreseeable internal and external security risks and magnitude of harm.
  • The resulting risk management plan is aligned with the entity’s strategic goals, forming the basis for the company's security policies and program.
Ineffective or Absent:
  • The CISO locates boilerplate security policies, inserts the organization's name, and has the CEO sign them.
  • If a documented security plan exists, it does not map to the organization’s risk management or strategic plan, and does not capture security requirements for systems and other digital assets.

Effective:
  • A cross-organizational security team comprised of senior management, general counsel, CFO, CIO, CSO and/or CRO, CPO, HR, internal communication/public relations, and procurement personnel meets regularly to discuss the effectiveness of the security program and new issues, and to coordinate the resolution of problems.
Ineffective or Absent:
  • The CEO, CFO, general counsel, HR, procurement personnel, and business unit managers view information security as the responsibility of the CIO, CISO, and IT department and do not get involved.
  • The CSO handles physical and personnel security and rarely interacts with the CISO.
  • The general counsel rarely communicates particular compliance requirements or contractual security provisions to managers and technical staff, or communicates on an ad-hoc basis.

Effective:
  • The CSO/CRO reports to the COO or CEO of the organization with a clear delineation of responsibilities and rights separate from the CIO.
  • Operational policies and procedures enforce segregation of duties (SOD) and provide checks and balances and audit trails against abuses.
Ineffective or Absent:
  • The CISO reports to the CIO. The CISO is responsible for all activities associated with system and information ownership.
  • The CRO does not interact with the CISO or consider security to be a key risk for the organization.

Effective:
  • Risks (including security) inherent at critical steps and decision points throughout business processes are documented and regularly reviewed.
  • Executive management holds business leaders responsible for carrying out risk management activities (including security) for their specific business units.
  • Business leaders accept the risks for their systems and authorize or deny their operation.
Ineffective or Absent:
  • All security activity takes place within the security department; security works within a silo and is not integrated throughout the organization.
  • Business leaders are not aware of the risks associated with their systems or take no responsibility for their security.

Effective:
  • Critical systems and digital assets are documented and have designated owners and defined security requirements.
Ineffective or Absent:
  • Systems and digital assets are not documented and not analyzed for potential security risks that can affect operations, productivity, and profitability. System and asset ownership are not clearly established.

Effective:
  • There are documented policies and procedures for change management at both the operational and technical levels, with appropriate segregation of duties.
  • There is zero tolerance for unauthorized changes, with identified consequences if these are intentional.
Ineffective or Absent:
  • The change management process is absent or ineffective. It is not documented or controlled.
  • The CIO (instead of the CISO) ensures that all necessary changes are made to security controls. In effect, SOD is absent.

Effective:
  • Employees are held accountable for complying with security policies and procedures. This includes reporting any malicious security breaches, intentional compromises, or suspected internal violations of policies and procedures.
Ineffective or Absent:
  • Policies and procedures are developed but no enforcement or accountability practices are envisioned or deployed. Monitoring of employees and checks on controls are not routinely performed.

Effective:
  • The ESP implements sound, proven security practices and standards necessary to support business operations.
Ineffective or Absent:
  • No or minimal security standards and sound practices are implemented. Using these is not viewed as a business imperative.

Effective:
  • Security products, tools, managed services, and consultants are purchased and deployed in a consistent and informed manner, using an established, documented process.
  • They are periodically reviewed to ensure they continue to meet security requirements and are cost effective.
Ineffective or Absent:
  • Security products, tools, managed services, and consultants are purchased and deployed without any real research or performance metrics to be able to determine their ROI or effectiveness.
  • The organization has a false sense of security because it is using products, tools, managed services, and consultants.

Effective:
  • The organization reviews its enterprise security program, security processes, and security’s role in business processes.
  • The goal of the ESP is continuous improvement.
Ineffective or Absent:
  • The organization does not have an enterprise security program and does not analyze its security processes for improvement.
  • The organization addresses security in an ad-hoc fashion, responding to the latest threat or attack, often repeating the same mistakes.

Effective:
  • Independent audits are conducted by the BAC, and independent reviews by the BRC. Results are discussed with leaders and the board. Corrective actions are taken in a timely manner, and reviewed.
Ineffective or Absent:
  • Audits and reviews are conducted after major security incidents, if at all.


The article also lists eleven characteristics of effective security governance, in addition to listing ten challenges to implementing effective security governance. I would highly recommend reading the full article.


References:
CERT’s resources on Governing for Enterprise Security


CERT and CERT Coordination Center are registered in the U.S. Patent and Trademark Office by Carnegie Mellon University

Thursday, December 13, 2012

Implementing IT Balanced Scorecard

Source: ISACA, Board Briefing on IT Governance 2nd edition

With IT increasingly becoming an enabler of business, more and more organizations are looking for effective and efficient management of IT, so that the investment in IT fetches optimum value. Along the same lines, the need for better IT Governance is being felt by the boards of an increasing number of organizations. One of the key domains of IT Governance is Performance Measurement. Going by "what is not measured cannot be managed", there need to be plans and processes in place for measuring the performance of IT so that it can be better governed.

Much of the value returned by IT is intangible. While it is easy to measure the tangible benefits, measuring intangible benefits is difficult. The Balanced Scorecard (BSC), which emerged in the early 1990s, has evolved into a very useful tool for measuring both tangible and intangible benefits, segmented into four perspectives: Financial, Customer, Internal Process and Learning. The IT BSC, as derived from the business scorecard, has been found to be a very effective measurement system that addresses the concern of reporting intangible benefits to the Board.
The Balanced Scorecard, as it has evolved over a period of time, is being looked at not just as a performance measurement tool, but as a strategic planning and management system. This is because Balanced Scorecards can be cascaded down to smaller business units, including IT, and aggregated upwards to the higher level. The IT BSC, which is cascaded from the business scorecard, can be further subdivided into one for each of the technology domains, for instance one for managing IT Operations and another for the IT Development areas. While doing so, it is important to maintain the linkages between each of the cascaded scorecards; in this way the Balanced Scorecard can facilitate strategy mapping, thereby improving the alignment of the objectives of the smaller business and IT units with the business strategy.


The perspectives of the IT BSC may be redefined to better represent the IT organization. For instance, the following four perspectives may be used in an IT BSC:

  • Corporate Contribution - Equivalent to the Finance perspective of the Balanced Scorecard, this represents the view of business executives on the IT department. 
  • Customer Orientation - Equivalent to the Customer perspective of the Balanced Scorecard, this represents the view of the end users on the IT department. 
  • Operational Excellence - Equivalent to Process perspective of the Balanced Scorecard, this represents the effectiveness and efficiency of various standards, processes and policies followed by the IT department. 
  • Future Orientation - Equivalent to the Learning and Growth perspective of the Balanced Scorecard, this represents a view of how well IT is prepared to meet the future needs of the business.

To be effective, the following three principles need to be built into the balanced scorecards:

  • Cause-and-effect relationships - The identified performance measures should have cause-and-effect relationships amongst them. For instance, a measure on improved developer skills (Future Orientation perspective) as a cause will result in improved quality of the applications delivered (Operational Excellence perspective), which in turn should contribute to user satisfaction (Customer Orientation perspective). 
  • Sufficient performance drivers - While it is common to measure all the possible outcomes (measuring what you have done), it is also important to identify and include sufficient performance drivers (how you are doing). A good mix of both outcome measures and performance drivers is essential for the scorecard to be effective. 
  • Linkage to financial measures - With the IT scorecard being cascaded from the enterprise business scorecard, the measures in the IT scorecard should link up to corresponding measures in the top-level business scorecard. 

To have the Balanced Scorecard implemented as part of the IT Governance initiative, the following steps are recommended:

  • Obtain commitment - Make a presentation to the board and executives explaining the concepts, benefits and cost of implementing it and get a commitment to go ahead. 
  • Kick-off - Kick off the Balanced Scorecard initiative as a project and as part of this activity, train the staff and identify the project team members. 
  • Strategy map - Get an understanding of the corporate business strategy and the sub unit level strategies and then establish a strategy map. 
  • Metrics selection - Understand the existing metrics if any and identify the required metrics, which should be a good mix of both outcome measures and performance drivers 
  • Metrics definition - With respect to each identified metric, create a standard definition, related processes to collect and manage the data. As part of this, the cause and effect relationships should also be clarified and the linkage with higher level scorecards should also be established. 
  • Assign ownership - Assign owners for each metric. 
  • Define Targets - With respect to each metric, set targets (may be a range) for the function heads to achieve and devise strategic initiatives to achieve these targets. 
  • Act on the results - Have the appropriate executive management or the board review the resulting measures as required, and then act on the results. 
  • Review periodically - The metric definitions, the linkages and the cause-effect relationships may require revision based on experience, and this is achieved through periodic reviews. 
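As a small illustration of the "Metrics definition" and cause-and-effect steps above, here is an assumed data model in Python for one cascaded scorecard metric; the field names and example values are mine, not from ISACA's guidance.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    perspective: str              # e.g. "Operational Excellence"
    owner: str                    # accountable head, per "Assign ownership"
    target: tuple                 # acceptable range, per "Define Targets"
    drives: list = field(default_factory=list)  # cause-and-effect links
    parent: str = ""              # linkage to the higher-level scorecard

defect_density = Metric(
    name="Delivered defect density",
    perspective="Operational Excellence",
    owner="Head of Application Development",
    target=(0.0, 0.5),                    # defects per KLOC, assumed range
    drives=["User satisfaction index"],   # effect in Customer Orientation
    parent="Cost of poor quality",        # assumed business-level metric
)
```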

Successful execution of strategy requires the successful alignment of four components: the strategy, the organization, the employees and the management systems. As Kaplan and Norton put it, “Strategy execution is not a matter of luck. It is the result of conscious attention, combining both leadership and management processes to describe and measure the strategy, to align internal and external organizational units with the strategy, to align employees with the strategy through intrinsic and extrinsic motivation and targeted competency development programs and finally, to align existing management processes, reports and review meetings, with the execution, monitoring and adapting of the strategy.”

Sunday, December 9, 2012

Measuring Performance of Enterprise Architecture

Enterprise Architecture is viewed as an important function in IT Governance, and it plays a vital role in aligning IT with the business. This function is expected to define the technical direction and to ensure application of the principles of architecture to the design and maintenance of IT systems, which in turn should be in alignment with the business vision, mission and strategies and with the role that IT within the organization is expected to play. While support in the form of commitment and funding is important, it is also important that the EA function consider the following (not exhaustive) to be successful:

  • Align with the business strategy and the culture of the organization. 
  • Actively involve itself in projects to ensure that the principles of design and evolution are adhered to, and ensure continued focus on the business requirements.
  • Offer technical consultancy for all the business and IT functions, both internal and external. 
  • Act as a gate for all decisions impacting the design and evolution of the architecture. 

IT Governance views EA as the hub of the IT wheel, with linkages to various processes, components and goals of the enterprise. Some of the key enabling links are:

  • Promoting and enabling business agility
  • Providing standards, policies and principles for the IT project, program and portfolio management function
  • Guiding and enabling cost management and consolidation
  • Facilitating cost-effective, scalable integration of various IT systems
  • Supporting IT Governance by defining / providing the conceptual and technical priorities, thereby promoting informed decision making

Going by the premise that what is not measured does not get managed, it is important to identify measurable objectives for the Architecture function itself, so that it is well managed and its contribution to the success of the organization is established. While measurement of architectural activities is difficult, COBIT suggests a set of measurable outcomes and performance measures, some of which are the following:

Number of technology solutions that are not aligned with the business strategy - One of the objectives of the EA function is to ensure that the technology solutions chosen or implemented are aligned with the business strategy, and a measure around this could be very useful to establish that the number of misaligned solutions is on the decline. Other measures could be derived around this, for instance representing it as a relative measure against the total number of solutions.

Percent of non-compliant technology projects and platforms planned - In a fast-changing business environment, there will be times when the business will need technology projects or solutions that are not compliant with the standards and principles laid out by EA. EA has the responsibility to carefully review such needs and grant waivers. Such waivers should be for a shorter term and should be backed by a plan to normalize them. At times this could call for revision of the standards, policies or principles. A measure around this could be very useful to establish that EA is effective in dealing with non-compliant technology projects and platforms.

Decreased number of technology platforms to maintain - Standardization is one of the objectives of EA, and it can contribute to cost reductions and reduced technical complexity. Statistics and surveys show that enterprises without an active IT Governance / EA function tend to have multiple applications, on different platforms, serving the same business requirement in different departments. With an effective EA function, these should be far fewer in number and should decline over a period. A measure around this is a very good indicator of EA being effective in this area.

Reduced application deployment effort and time-to-market - Supporting business agility is yet another key objective of the EA function. Today’s businesses operate in a highly dynamic industry environment, and in order to stay competitive and sustain their market position, they need support from IT to deliver new or changed capabilities with a reduced time-to-market. A delayed delivery from IT could mean a lost opportunity. A measure around this indicator would be really helpful in establishing how IT is supporting business changes.

Increased interoperability between systems and applications - It is quite common for enterprises to have multiple applications for specific needs, with a need for these applications to share data and information with each other. With cloud computing gaining wider acceptance, most enterprises are looking at discrete cloud-based solutions. With the benefits outweighing the concerns and constraints, and with the industry working towards addressing those concerns, there will be an increased move to the cloud. This means hosted applications from various providers will need to be interoperable and work with other in-house applications. EA should ensure that the technology and solutions acquired or designed support this important attribute, i.e. interoperability. A measure around this parameter would be an important indicator of the EA function’s effectiveness.

Percent of IT budget assigned to technology infrastructure and research - Yet another expectation of the EA function is that it should help the business leverage emerging technology to its advantage. This requires the architects to be continuously looking for newer technologies and their application areas, and to recommend such technology or solutions for implementation so that the business gets the most out of IT to accomplish its mission. It is also important that the extent of this activity be in line with the identified and stated role of IT in the organization. While the percentage of IT budget used for research is a useful measure, other useful measures could be derived from it, for instance the number of research solutions implemented as a percentage of the total number of solutions implemented in a given period. Business satisfaction with the timely identification and analysis of technology opportunities is another related measure, indicative of the outcome of this research.

Number of months since the last technology infrastructure review - In the fast-changing IT space, it is important to ensure that the technology infrastructure is of continued relevance in meeting the business objectives, and, if needed, changes should be considered. A measure indicating that the EA function performs reviews of the technology infrastructure periodically is a good indicator of its effectiveness. Measures derived from the outcomes of such reviews will also be very useful.
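By way of illustration only, the first two measures above could be computed from a simple project inventory; the record layout in this Python sketch is invented, not from COBIT.

```python
projects = [
    {"name": "CRM revamp",    "aligned": True,  "compliant": True},
    {"name": "Legacy bridge", "aligned": False, "compliant": False},
    {"name": "Data lake",     "aligned": True,  "compliant": False},
]

# Count of technology solutions not aligned with the business strategy.
misaligned = sum(1 for p in projects if not p["aligned"])
# Percent of non-compliant technology projects.
pct_non_compliant = 100 * sum(1 for p in projects if not p["compliant"]) / len(projects)

print(f"Solutions not aligned with business strategy: {misaligned}")
print(f"Percent of non-compliant technology projects: {pct_non_compliant:.0f}%")
```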

There is no one size fits all in IT and as such the measures indicated above need to be tailored to suit the organization and the role of IT within the organization.

References:

COBIT - an IT Governance framework from ISACA.

Developing a successful governance strategy - A best practice guide for decision makers in IT from The National Computing Center, UK

Monday, November 12, 2012

PPM - Project Categorization

PPM is about ‘doing the right things right’, where things refers to work efforts, projects and programs; doing the right things refers to prioritizing and selecting projects and programs in line with the strategy and objectives of the organization; and doing things right refers to executing projects and programs in such a way that the benefits are realized.

Portfolios, programs and projects are all formal expressions of resource assignment decisions. Accordingly, a systematic approach to collecting, selecting, planning and managing them is key to a high-quality governance process. Projects, programs and portfolios share a common life cycle formed around four key stage gates: Create, Select, Plan and Manage. In the context of a portfolio, an important first task is to determine the investment types or categories with which projects or programs are categorized. The purpose of this blog is to examine the need for, and methods of, project categorization.

Purpose of Categorization

Broadly, the following are the purposes for which organizations find categorization useful.

Strategic alignment: Certain projects share common characteristics that call for a common approach with respect to prioritization, project tracking and monitoring, strategic visibility, etc. Appropriate categorization of projects offers better visibility in terms of strategy execution and benefits realization.

Capability specialization: Another reason organizations would be interested in categorizing projects is to identify and group similar projects, so that they all share common tools, technology and terminology, which helps better management of resources and communication across projects.

Promote project approach: Though this purpose is of a minor nature, it helps the organization promote a project culture, differentiate operations from projects, and use a common methodology to manage such projects.

The following is a mind map of organizational purposes around categorization:

Strategic Alignment
  • Selecting / prioritizing of projects / programs, with benefits such as aligning commitments with capabilities, managing risk / controlling exposure, allocating budget, balancing the portfolio, and identifying the approval process.
  • Planning, tracking and reporting of resource usage; performance, results and value; and investments, with the benefit of comparability across projects, divisions and organizations.
  • Creating strategic visibility, with the benefit of visibility across projects, divisions and organizations.

Capability Specialization
  • Capability alignment, with benefits such as choosing a risk mitigation strategy, choosing the contract type, choosing the project organization structure, choosing methods and tools, matching skill sets to projects, allocating projects to organizational units, setting price, and enhancing credibility with clients.
  • Capability development, with benefits such as developing methods and tools, managing knowledge, developing human resources, and adapting to the market / customer / client.

Promoting a Project Approach
  • Providing a common language, to facilitate better communication.
  • Distinguishing projects from operations, to better manage the work efforts.


Categorization Schemes

None of the standards or frameworks specify a particular scheme with which to categorize, and it is left to organizations to decide on the scheme that best suits their needs. It has been found that most organizations use multi-dimensional, composite, attribute-based categorization schemes.

Here again the following three broad approaches are in wide use:

Hierarchical scheme: In this scheme, projects are hierarchically grouped based on multiple attributes. For instance, at the top level, the projects may be categorized as small, medium and large based on the estimated investments, and then further categorized into application areas like Infrastructure, Enterprise Applications, Field Applications, etc. There could be further categorizations as well. Under this scheme, each project falls under one unique category at each level.

Parallel scheme: Unlike in the hierarchical scheme, in this case, projects are assigned several sets of attributes. For instance, the projects may be categorized by complexity, technology and strategic importance and a project may find a place in all three categories.

Composite scheme: In this scheme, the categories are determined based on the result of applying more than one attribute. For instance, the category project management complexity may be determined by combining multiple attributes like team size, number of units or modules to be delivered, development efforts, duration, etc. Similarly a category deployment complexity may be based on attributes like process impact, end user scope of impact, project profile, project motivation, etc.
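To make the composite scheme concrete, here is a small, assumed scoring sketch in Python that derives a "project management complexity" category from the attributes mentioned above; the weights and cut-offs are invented for illustration.

```python
def pm_complexity(team_size: int, modules: int, effort_pm: float) -> str:
    """Combine several attributes into one composite category."""
    score = 0
    score += 2 if team_size > 20 else (1 if team_size > 8 else 0)
    score += 2 if modules > 10 else (1 if modules > 3 else 0)
    score += 2 if effort_pm > 100 else (1 if effort_pm > 24 else 0)  # person-months
    if score >= 5:
        return "high"
    return "medium" if score >= 3 else "low"

print(pm_complexity(team_size=25, modules=12, effort_pm=140))  # -> high
```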

Conclusion

There is no single scheme or approach that best suits every organization. It is for the people involved in IT / business governance to carefully choose the categorization based on the organization’s vision and strategy, and then create a process to consistently determine the appropriate category for each project, so that the relevant strategic and tactical tools and methodologies can be applied to it, which in turn will ensure realization of the intended benefits with the desired level of efficiency and effectiveness.

References:


Project Portfolio Management: Doing the Right Things Right


Thursday, November 1, 2012

Windows 8 - My initial experience

As you all know, Microsoft has been betting big on Windows 8, and much has been said about it in recent times. I wanted to try it out myself, and the attractive upgrade offer from Microsoft tempted me further, so I just went ahead with the upgrade.

I decided to try it on my office laptop, a DELL XPS 13, which is now my own as I bought it from my organization. I knew that Microsoft had made the upgrade easy using their Upgrade Assistant. I just googled for it and ran it. It did not take much time to scan the hardware and all the installed applications and come up with a compatibility report. Among a few others, the components that were not compatible were the Cisco VPN client and the touchpad driver. While I would not be using the VPN client any more, I thought I could find a driver update from DELL, and as such went ahead with the upgrade by ordering it online.

The 2 GB download took under two hours, and after downloading, it asked whether to create a disk or to save it for a later install, but I chose to run it right away, as I did not have any data to be backed up and was fine with losing everything on the laptop. The installation did not take much time, there were a couple of restarts in between, and I was greeted with the Windows 8 screen prompting me to log in. Once logged in, I saw the Start screen, much like the Windows Phone, with tiled shortcuts for the Windows 8 applications.

Amongst the apps on the Start screen is the shortcut for the Desktop, which is similar to the typical Windows 7 desktop screen but without the Start button used to explore the programs. During login, Windows 8 did greet me with a note that there are certain hidden shortcuts in the corners, which I did not pay much attention to, but figured out myself later. Hover your mouse over the lower left of the screen and you will see the shortcut to switch to the Start screen / desktop. When you take the mouse cursor to the top right, you will find the previously used applications, be they Windows 8 apps or desktop apps. Alt + Tab also works for switching amongst the open applications. The lower right corner hides a context menu which brings up settings and search options, whose operation depends on the application actively running in the foreground.

I experienced an inconsistent response from my external wireless mouse, but some research revealed that it was a problem with the driver for the USB 3.0 hub I was using to plug in the wireless USB dongle. The DELL XPS 13, being an ultrabook, has only two USB ports and no Ethernet port, which is why I had to use a USB hub to have enough USB ports. I could find a driver update for the USB hub, which was from Fresco Logic, and it worked well after the update.

I then looked up the DELL support website for available driver updates for Windows 8; there were quite a few updates for the touchpad, Wi-Fi and WiDi (Wireless Display), and I downloaded and applied them all. With this, the OS and hardware worked fine.

Chrome and Firefox worked as before, but the desktop version of IE did not work well. The new IE that is part of the Windows 8 apps worked very well, but the UI is far different and needs time to get used to. It is more like IE on a smartphone: no menu bar, toolbar, etc. I still find it better to work with the desktop versions of the browsers, as I am used to them, but I am also making attempts to use the other browser time and again to familiarise myself with it.

It took a while for me to figure out how to shut down my laptop, as there isn’t a Start button and the Windows 8 Start screen does not have an option either. I later figured out that it hides in the Settings context menu, where you will find a power icon, which then leads to options for restart, sleep or shut down. Later I also found that Alt + F4 on the desktop brings up the usual shutdown screen. Alt + F4 also works for closing the active applications. The Windows 8 apps do not have a title bar and there are no close buttons or menus; Alt + F4 seems to be the only way to close such applications.

Another issue I ran into is that I had to give up my corporate domain user account and log in using my own local account. Switching to a different local account was very easy, but I did not get to see the apps / configurations that I had set up as the corporate domain user. Though this is the expected behavior with any version of Windows, I need to explore much more to figure out how to get these working for multiple users.

The native Mail application allows you to set up your Google mail, Exchange email or any other email accounts. The Mail, Calendar and People applications are all integrated, and you can sync contacts from Facebook, LinkedIn, Twitter and Google into the People app.

Though the UI and the OS are good and usable, they are far different from earlier versions of Windows, and one may have to spend considerable time exploring, learning and getting used to this all-new Windows 8 OS.

Though a little confusing, given the considerable changes in the UI, it is very much usable. There is still so much for me to explore, and I will post one more blog on this topic covering more of my experiences with Windows 8.

Sunday, October 21, 2012

Top 4 Principles for IT Leaders to focus on

Experts predict that IT leadership is taking a hit, as the business is not happy with the value that IT delivers. The emergence of cloud and SaaS-based applications has led business leaders to think that they can get the needed IT support as services, though they are often unaware of the issues or challenges with that idea. But this has certainly made IT leaders think and do a self-assessment in terms of their focus areas and value delivery. Here are four principles that may help IT leaders continue delivering value to the business and thereby ensure their very existence.


Embrace the Business Change

In today’s competitive world, businesses need to revisit their vision, mission and strategies more often than they did in the past. Most of the time this will call for changes to people, process and technology, and depending on the priorities, such changes may have to happen very quickly. IT has traditionally resisted change; though with Agile and other approaches changes are welcome, various other factors, like the maintainability of the systems and the cost of change, make it a challenge for IT to embrace them. This is why business leaders are trying to explore options to minimize their dependence on their own IT, so that they can move on with the desired changes quicker and reap the benefits of the change.

For IT leaders, embracing change is a challenge, as most of them are still living with legacy systems which have very poor characteristics in terms of scalability and maintainability. IT leaders should find ways to overcome these barriers and should be willing and ready to support business changes. Possible solutions include revisiting their application design principles, with a view to ensuring that all current and future custom applications are service oriented and are highly scalable, maintainable and performant. For other legacy systems, they can explore options to service-enable them using appropriate tools and technologies, without changing the systems themselves.


Focus on Value Delivery

Though traditionally IT has been a cost centre, most IT leaders have shown interest in treating IT as a profit centre. Most IT investments, though evaluated up front in terms of the return (value) that the investment brings back, are not monitored against that value throughout execution. Ideally, the focus on value should not be lost during the execution phase. This is because discoveries or problems encountered as project execution progresses may have a significant impact on the perceived value, and in such cases it would be wiser to make the call to fail the project and call off further investment without waiting for the end result.

When something is offered for free, everyone will want it, irrespective of whether there is a real use for it. Similarly, applying the 80/20 rule, 80% of the business functions are likely to consume only 20% of the IT services. There needs to be a method or process to keep account of the service offerings, identify that 20% of services, and prioritize the support for them, taking up changes around these services and delivering them faster than the business expects.

Bringing in a culture (at least within the IT function) wherein the need for focus on value delivery is well understood and demonstrated by all would certainly help achieve greater benefits overall. Every member should know and be aware of the expected business value of every project or sub-project that they are associated with, and should take pride in ensuring that their actions in fact result in the business enjoying the perceived value.

IT leaders should devise suitable processes or systems which will help measure everything, and use them in turn to calculate and publish the metrics or statistics around the business value delivered by different projects or investments. IT Governance frameworks like COBIT can help achieve this.


Communicate & Collaborate

IT leaders normally express their point of view technically, which business users or leaders may not get right, and eventually the value proposition might not be understood well. This is where IT leaders should start putting across their proposals or points of view in a way that makes sense to business leaders. The converse is also true: when business leaders talk about business changes, IT leaders find it difficult to understand, and this too IT leaders should overcome. It is important that IT leaders, and for the most part their teams, be willing to acquire the required business skills and demonstrate them in their communication and deliveries.

Similarly, it is important for IT leaders to collaborate on business proposals and get involved right from the initial stages, so that they get to know the business requirements and priorities better, and at the same time can present back the various risks and caveats that the tools and technology enabling the change may bring in for the business to manage.


Talent Development

With the technology landscape changing rapidly, and business leaders looking to such enabling technologies to gain competitive advantage or to improve efficiencies at various levels, the IT team has a pressing need to cope with such demands. This is where IT leaders should now look for people with multiple technical and business skills, and with the willingness and ability to learn newer technology and business skills faster. This is best achieved through mentoring and not by force.

IT leaders, together with HR leaders, should also provide employees an environment which is conducive to developing their abilities. The organization culture should also envision the need for continuous learning and devise a system to measure and monitor the efforts spent on learning. For instance, depending on the role, employees may be asked to log a certain number of learning hours in a year on specified technical and business areas.

The IT leaders should also be continuously learning and stay on top of the technology trends, so that they can identify the right technology and tools that can improve the service capabilities of the business functions and in turn could give competitive advantage.



The right strategies around these four areas would certainly help IT leaders stay focused on business benefits and in turn demonstrate measurable value from IT investments.

Sunday, October 14, 2012

Application Architecture Review - Security

In continuation of the Architecture review series, let us focus on the security review in this blog. With information security breaches hitting the news headlines quite frequently, many enterprises are realizing the real need to manage this security risk and be resilient. As such, it is possible that as an architect you might be called upon to perform a security review of existing applications. I have tried to put together the following areas of concern, which need a closer look to form an opinion on whether the application architecture is secure enough.
 
 
In general the broad areas of concern for the security architects should be the following:
 
 
Authentication – Review the tools, technology and the approach used by the application to establish the identity of the application users for possible deficiencies. In this connection the following specific areas need attention.
  • Look for identification of the legitimate human and system users of the application in the requirements document, which in turn should be validated with appropriate business scenarios. 
  • If the application exposes interfaces to external systems, understand how access by such systems is identified and authenticated. Also understand how secure such external systems are, and if possible ask for a security assessment of them.
  • Identify how users are authenticated, for instance whether two-factor or three-factor authentication is used.
  • Check if Single Sign-On is implemented, and in such a case understand how it is implemented and what tools and technology are used. If the identity provider is external to the system boundary, also check how the information in transit between the identity provider and the application is secured.
  • In the case of external identity providers, it would also be worth checking the security practices followed by the service provider and whether they are subject to regular external independent security assessments.
  • If the application maintains user information locally and authenticates against it, ensure that identity-related data is secured appropriately from unauthorized access.
  • It would also be worth understanding how the database servers authenticate the application or the application user. If the application users happen to be users of the database as well, then the mechanisms implemented to prevent such users from directly accessing the database need to be scrutinized.
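On the point above about securing locally maintained identity data, one well-known approach is to store only a random salt and a slow key-derivation hash, never the plain password. A minimal Python sketch follows; the parameter choices are assumptions for illustration, not a recommendation.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # assumed PBKDF2 work factor

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, digest) suitable for storage instead of the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, digest = hash_password("s3cret")
assert verify_password("s3cret", salt, digest)
assert not verify_password("guess", salt, digest)
```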
 
 
Authorization - Each of the identified human or system users would operate on the application by assuming defined roles, and the authorization to access various components or information should depend on such roles. Get a clear view of how roles and authorization are implemented in the application. The following specific areas are worth attention in this regard.
  • Check if there exists an information sensitivity policy or information privacy policy relevant to the data or information being accessed or managed by the application.
  • Understand how the defined roles are mapped to the various datasets in terms of Create, Read, Update and Delete permissions. It would also be good to examine whether the various roles defined by the organization are in line with the principle of segregation of duties, and to look at how users with multiple or overlapping roles are handled by the system.
  • With a view to improving application performance, developers tend to create interfaces (both visual and non-visual) that are chunky as opposed to chatty. While this holds good in terms of application performance, the datasets being served need to be reviewed with respect to information sensitivity, and role-based permission restrictions should be applied to all internal and external APIs and interfaces.
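Here is a minimal sketch, in Python, of the role-to-dataset CRUD mapping discussed above; the role and dataset names are invented for illustration.

```python
PERMISSIONS = {
    ("claims_clerk", "claims"):    {"create", "read", "update"},
    ("auditor",      "claims"):    {"read"},
    ("auditor",      "audit_log"): {"read"},
}

def authorize(roles: list, dataset: str, action: str) -> bool:
    """Allow the action if any of the user's roles grants it on the dataset."""
    return any(action in PERMISSIONS.get((role, dataset), set())
               for role in roles)

# The same check should guard every API and interface, chunky or chatty:
assert authorize(["auditor"], "claims", "read")
assert not authorize(["auditor"], "claims", "update")
```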
 
 
Availability / Scalability - Systems are designed to process data in an expected and timely manner, so that information users can make the most of it and perform business operations efficiently and effectively. The general experience is that systems perform very well in the initial testing phase, but when deployed in production their behaviour could be different, and they might slow down considerably due to various environment and load related issues. As an architect, it is essential to verify that proper estimation has been done for expected user and data growth and that the application is designed to meet such needs.
 
 
Auditability – The systems should be designed to log certain events which could potentially lead to a security breach. These logs should be readable when needed by users with appropriate roles and should be monitored periodically. Event alerts also help notify administrators of the occurrence of certain types of events which may require immediate attention. Examine the following areas of the application design to form an opinion on this concern.
  • Review the Application architecture to understand how the event alert and logging mechanism is implemented. 
  • Review for completeness the various events that are being handled and the relevant data being logged. Examine if any sensitive data is being logged and, if so, whether role-based access restrictions are also implemented around the log data. 
  • Check how the event log data is organized and stored and also look for existence of any policy or procedures around managing such log data.
  • Understand the regulatory needs, which many times govern the data to be logged and how long such log data needs to be retained.
  • Log data grows fast, and if it is stored within the same production database as the application, there is a possibility that this growth may impact the performance of the application itself, affecting the availability needs. Depending on the volume and growth rate of the data, ensure that the chosen tools and technology are adequate and appropriate.
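As an illustration of the logging concerns above, here is an assumed Python sketch of a structured security-event logger that masks sensitive fields before they reach the log store; the field names are invented.

```python
import json
import logging
import time

SENSITIVE_FIELDS = {"password", "ssn", "card_number"}  # assumed list
logger = logging.getLogger("security_audit")

def log_security_event(event_type: str, user: str, **details) -> None:
    """Write one structured audit entry with sensitive values masked."""
    masked = {k: ("***" if k in SENSITIVE_FIELDS else v)
              for k, v in details.items()}
    entry = {"ts": time.time(), "event": event_type,
             "user": user, "details": masked}
    logger.warning(json.dumps(entry))

log_security_event("login_failure", "jdoe",
                   source_ip="203.0.113.7", password="hunter2")
# The stored entry carries "password": "***", never the raw value.
```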
 
 
This blog is not an exhaustive checklist; it is just intended to bring out the broad concerns which, at a minimum, should be considered in an architecture review. TOGAF 9.1, in its ADM Guidelines and Techniques, lists design considerations for building security in as part of the design and architecture. These security design considerations can be used for an exhaustive security review, which also covers implementation, change management and the IT infrastructure.

Also check out my own blog titled Building Secure Applications, which is about making security part of the SDLC.

Sunday, September 30, 2012

Building Secure Applications

Awareness of security has increased manifold, thanks to the frequent news headlines on security breaches and compromises leaving organizations with heavy damage and / or loss. Boards have started realizing that information security failures are among the primary causes of the IT risks that their organizations have to mitigate or be ready to face. But application design and development processes have not seen significant change to ensure that the delivered application is secure and helps the organization bring down its risk exposure.
 
Most software engineers and architects involved in application design and development still have very little security awareness, and they believe that security is something the IT infrastructure team will take care of. On the other hand, the IT infrastructure team believes in the investment they have made in robust security tools, expects that they have thereby done their part to protect the organization, and passes any application-specific security issues back to the development teams. Whose responsibility is it, then, to ensure that a secure application is built?
 
This is where an independent, enterprise-wide security consulting team needs to extend security consulting to the various project teams within the organization. But even this will not solve the problem completely, as it is for the development teams to be security aware and to design and build secure applications. 
 
Here is what the security consulting team can offer to the project teams:
 
Pre-project security reviews: These help the decision makers be aware of the security threat landscape they will be dealing with upon implementation of the project. This could even have an impact on project costing, as additional cost might have to be incurred to mitigate the security risks the project brings on; in most cases this is not just a one-time investment but a recurring operating expense as well. Such reviews help the organization make informed decisions, considering the extent of impact on the organization’s risk exposure.
 
 
Design-level security reviews: As with software quality, the earlier one spots security concerns, the cheaper and easier it is to factor in mitigation plans. At this stage, the security consulting team can help the project teams in the following ways:
  • Perform a high-level evaluation of the security concerns that the design could expose, and suggest possible security controls to mitigate such concerns.
  • Equip the project stakeholders with the necessary information associated with the identified security concerns, so that they can take better risk management decisions.
  • Extend guidance to the project teams on the choice of controls and solutions that might best address the security concerns.
  • Perform research and exploration where a technology or feature is quite new and innovative and could potentially open a new threat landscape.
  • Offer training to the design and development teams about the most common security vulnerabilities and the attack patterns, so that they design and build counter measures early on.
Implementation security reviews: The security consulting team offers to perform reviews and verifications in the later phases of application development, so that vulnerabilities, if any exist, do not slip through to production. Typically the services offered by the security consulting team are one or more of the following:
  • Ensure that the security concerns identified in the design review are fully addressed.
  • Perform security specific reviews on the code and this could be on a sampling basis, depending on the acceptable risk level of the application.
  • Use automated tools that will examine the code and / or the packaged component or application.
  • Review the security specific test cases as created by the software testing team and suggest additional test cases to ensure better coverage in the security testing.
Having a security consulting team alone will not be enough to build secure applications. The software engineering process, which defines and describes the approach and methodologies used for project execution, should also mandate consuming the services of the security consulting team, and their review reports should be identified as entry and exit criteria for the various phases of application development.
 
Security education is also required to ensure that the project team is aware why security is so important for the organization. The following measures towards security education would help the teams to design and develop secure applications:
  • Include a module of security education in the employee induction program, so that every new employee is oriented towards the security needs of the organization.
  • Offer in-depth security training to architects and selected software developers, and ensure that every project team has at least a couple of such trained resources. 
  • Ensure that the coding guidelines document also factors writing secure code.
  • Hold periodic training sessions where, in addition to security-related presentations, recent security review findings and/or defects found during security testing can be taken up for discussion, ending with solutions to prevent such issues from creeping in.
  • If the organization publishes a periodic news bulletin, ensure that security is a mandated section in it and use it to publish security related updates.
As can be observed, security is not just one person's or department's responsibility; it should be the outcome of the collaborative efforts of the teams concerned, with the common goal of building a secure application. The security consulting team can also be external to the organization, i.e. this function can be outsourced to security consulting firms specializing in application security practice.

Sunday, September 23, 2012

Data De-identification Dilemma

De-identification is the process of removing various elements of a dataset so that a data row ceases to be personally identifiable to an individual. It is all about protecting the privacy of the users of systems, as backed by legislation prevalent in many countries. While HIPAA in the US is the best-known act providing for protection of personally identifiable data, many other countries have also promulgated legislation that regulates the handling of such data in varying degrees.
 
Most organizations are becoming increasingly security aware as they are impacted by the risks of not appropriately protecting their data and information assets. For the purpose of this discussion we can assume that appropriate checks and controls are in place for data in the active store. But the evolution of the cloud and the increasing integration of external systems mean that data exchanged with or disclosed to any interconnected system, or stored elsewhere on the cloud to support needs such as backup or business analytics, must be de-identified, so that the privacy interests of the individuals concerned are protected and the applicable privacy legislation is complied with.
  
Under HIPAA, individually identifiable health information is de-identified if the following specific fields of data are removed or generalized (a minimal sketch of this follows the list):
  • Names
  • Geographic subdivisions smaller than a state
  • All elements of dates (except year) related to an individual (including dates of admission, discharge, birth, death)
  • Telephone & FAX numbers
  • Email addresses
  • Social security numbers
  • Medical record numbers
  • Health plan beneficiary numbers
  • Account numbers
  • Certificate / license numbers
  • Vehicle identifiers and serial numbers including license plates
  • Device identifiers and serial numbers
  • Web URLs
  • Internet protocol addresses
  • Biometric identifiers (including finger and voice prints)
  • Full face photos and comparable images
  • Any unique identifying number, characteristic or code
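
The following is a minimal sketch of this style of field removal and generalization. The record layout and field names are hypothetical, and the rules are simplified (HIPAA, for instance, permits retaining the first three digits of a ZIP code when the area is populous enough):

    # Direct identifiers to drop outright; the field names are illustrative.
    DIRECT_IDENTIFIERS = {
        "name", "phone", "fax", "email", "ssn", "medical_record_no",
        "account_no", "license_no", "vehicle_id", "device_id",
        "url", "ip_address",
    }

    def deidentify(record):
        clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        # Generalize rather than drop where the rule allows it: keep only
        # the year of a date, and drop geography smaller than a state.
        if "date_of_birth" in clean:
            clean["birth_year"] = clean.pop("date_of_birth")[:4]  # 'YYYY-MM-DD' -> 'YYYY'
        clean.pop("zip", None)
        return clean

    patient = {"name": "J. Doe", "ssn": "000-00-0000", "zip": "02139",
               "date_of_birth": "1961-07-02", "diagnosis": "flu"}
    print(deidentify(patient))   # {'diagnosis': 'flu', 'birth_year': '1961'}
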
In today’s context, a vast amount of personal information is becoming available from public and private sources all around the world, including public records such as telephone directories, property records, voter registers and even social networking sites. The chance of linking such data against a de-identified dataset and thereby re-identifying individuals is high. Professor Latanya Sweeney testified that there is a 0.04% chance that data de-identified under the health rule’s methodology could be re-identified when compared to voter registration records for a confined population.
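
The linkage works on the quasi-identifiers a dataset retains even after the direct identifiers are gone. The sketch below, built on entirely hypothetical records, shows how a "de-identified" row that kept date of birth, sex and ZIP can be joined against a public voter roll:

    # Hypothetical data: the health rows dropped names, but kept
    # quasi-identifiers that the public voter roll also carries.
    health = [{"birth": "1961-07-02", "sex": "F", "zip": "02139", "diagnosis": "flu"}]
    voters = [{"name": "Jane Roe", "birth": "1961-07-02", "sex": "F", "zip": "02139"}]

    QUASI = ("birth", "sex", "zip")

    def link(deidentified, public):
        # Index the public records by their quasi-identifier tuple,
        # then look each "anonymous" row up in that index.
        index = {tuple(v[k] for k in QUASI): v["name"] for v in public}
        for row in deidentified:
            name = index.get(tuple(row[k] for k in QUASI))
            if name:
                yield name, row["diagnosis"]

    print(list(link(health, voters)))   # [('Jane Roe', 'flu')]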
 
Others have also written about the shortcomings of de-identification. A June 2010 article by Arvind Narayanan and Vitaly Shmatikov offers a broad and general conclusion:  
The emergence of powerful re-identification algorithms demonstrates not just a flaw in a specific anonymization technique(s), but the fundamental inadequacy of the entire privacy protection paradigm based on “de-identifying” the data.
 
With various tools and technologies, it may at times be possible to achieve provably absolute de-identification. However, it seems unlikely that there is a general solution that will work for all types of data, all types of users, and all types of activities. Thus, we continue to face the possibility that de-identified personal data shared for research and other purposes may be subject to re-identification.
 
There is wide variance on this subject among the various pieces of legislation. While some require removal of specific data fields, some mandate adherence to certain administrative processes, and a few others require compliance with one or more standards.
 
Robert Gellman, in his paper titled The Deidentification Dilemma: A Legislative and Contractual Proposal, calls for a contractual solution backed by new legislation. Whether or not it is backed by legislation, it would be wise to follow this approach, as it binds the data recipients to the requirements of the data discloser. With the use of SaaS applications on the rise, the chances of data being stored elsewhere and travelling on the wire are very high, and the increasing need for data and application integration over the cloud across partner organizations again makes such a contractual solution a must.
 
The core proposal in the legislation is a voluntary data agreement: a contract between a data discloser and a data recipient. The proposed act (the PDDA) would apply only to those who choose to accept its terms and penalties through such a data agreement. It establishes standards for behaviour and civil and criminal penalties for violations; in exchange, there are benefits to both the discloser and the recipient.
 
With the above understanding of the requirements around de-identification of data, let us list the circumstances that mandate data de-identification:
  • All non-production database instances, including the development, test, training and production support instances an organization may maintain. It is quite common for DBAs to maintain and run scripts that anonymize personal data before such an instance is exposed for general use by the intended users (a sketch of such a masking script follows this list). But it is also important to ensure that the anonymization is in line with the regulatory requirements of the region where such instances are hosted.
  • The increased use of business analytics calls for the maintenance of one or more data marts, which are effectively replicas of the production database. While it is perfectly fine if such data marts store data summarized at a level where no row represents an individual, care has to be taken when micro-level data is also kept in the mart to facilitate drill-through.
  • Application controls – All systems that work with databases containing personally identifiable information should be designed with appropriate access controls built in, to protect the sensitive information from being displayed or extracted.
  • Remote workers and mobility challenges – Organizations have started accepting the culture of remote working and employee mobility, which means employees access the data through one or more applications from anywhere in the world, using a multitude of devices. This calls for appropriate policies, checks and controls to remain compliant with privacy legislation.
  • Partner systems – In today’s connected world, business partners – customers, vendors or even contracted outsourced service providers – gain access to the organization’s systems and databases. This calls for a careful evaluation of such parties and their voluntary agreement to comply with the organization’s data privacy needs, and even for periodic training and audits covering the partner organization’s employees and systems.
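
For the non-production case above, the sketch below shows the kind of masking script a DBA might run before refreshing a test instance. The column names are hypothetical; the point is that the masking is deterministic where referential integrity matters, and destructive everywhere else:

    import hashlib, random

    def mask_email(email):
        # Deterministic pseudonym: the same source email always maps to
        # the same fake address, so joins across tables still work in test.
        digest = hashlib.sha256(email.encode()).hexdigest()[:10]
        return f"user_{digest}@example.test"

    def mask_rows(rows):
        for row in rows:
            row["name"] = f"Test User {random.randint(1000, 9999)}"
            row["email"] = mask_email(row["email"])
            row["phone"] = "000-000-0000"   # no need to preserve structure
            yield row

    rows = [{"name": "J. Doe", "email": "jdoe@example.org", "phone": "555-0101"}]
    print(list(mask_rows(rows)))
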
Today’s lack of clear definitions, de-identification procedures, and legal certainty can impede some useful data sharing. It can also affect the privacy of users when the lack of clarity about de-identification results in sharing of identifiable data that could have been avoided. The approach proposed by Robert Gellman makes available a new tool that fairly balances the needs and interests of data disclosers, data users, and data subjects. The solution could be invoked voluntarily by data disclosers and data recipients. Its use could also be mandated by regulation or legislation seeking to allow broader use of personal data for beneficial purposes.


Reference:
Robert Gellman, The Deidentification Dilemma: A Legislative and Contractual Proposal, Version 2.4, July 12, 2010.

Saturday, September 22, 2012

Cloud Computing - Governance Challenges


I recently read a Technical Note published in October 2006 by the Software Engineering Institute, titled ‘System-of-Systems Governance: New Patterns of Thought’. The Note was primarily aimed at organizations like the US Department of Defense, where multiple organizations and systems need to collaborate to form part of the DoD as the bigger enterprise. It discusses some of the key areas where traditional governance needs review and revision to handle the very nature of a system of systems.

CIOs are in favour of embracing the cloud, and as such may have contracted for multiple software systems (SaaS) for different needs: for instance Salesforce for CRM, ServiceNow for IT service management, Windows Azure for custom application development, NetSuite for ERP, and so on. Similarly, the IT organization may have contracted with different cloud service providers for its infrastructure (IaaS), storage and platform (PaaS) needs. The XaaS list is growing, with vendors now offering Database as a Service, Security as a Service, Identity as a Service, and more. With all these contracted components, the IT organization certainly faces challenges in implementing governance practices, as diverse system components form part of the larger system.

In today’s world of increasing cloud adoption and outsourcing, business organizations are in almost the same state as the DoD described above, i.e. a system of systems, and the key governance challenges very much apply. The following are five of the six areas that need to be examined to realign the governance practice within an enterprise:

Collaboration and Authority

With systems and components owned by various organizations in use, a single authority is likely to be ineffective even if the owners of constituent systems are unusually committed to the system of systems. And if authority is essential to the enforcement of IT policy, then with insufficient authority, independent vendor organizations can always be expected to be reluctant to adopt shared policies. Collaboration among the constituent system owners is required, at least for problem solving, participating in decision making, and devising provisional solutions to meet emergency needs. While cloud computing is still maturing, standards need to emerge to facilitate the necessary collaboration, both technically and in terms of policy federation.

Motivation and Accountability

There should be motivation for anyone to adhere or adapt to a shared policy; simply enforcing shared standards or policies across independent vendors will not work as effectively. The Technical Note refers to Zadek’s five stages of learning that organizations go through to achieve the benefit of voluntary collaboration:

[Table: Zadek’s five stages of organizational learning – defensive, compliance, managerial, strategic and civil – and what motivates organizations at each stage.]

The table above not only shows what motivates organizations at different stages but also reveals what they may need to learn. For example, a defensive organization that claims common system-of-systems practices are irrelevant may need to be educated about threats to its reputation due to its lack of voluntary compliance. At all stages, we need policies to give individuals and organizations the incentive to do the right thing.

Multiple Models

A simple example that highlights the significance of this area is the security implementation within the component systems. Each component and system vendor may implement a different security model within its individual system, and that makes governance of the overall organization challenging. This calls for a dynamic governance model that can map and interact between the different models of the individual components (a sketch of such a mapping follows below). While security is just one example, there are other areas where the components are designed and modelled differently. Framing one set of governance policies to suit all such systems is certainly not a good approach; instead, the governance framework should provide for variations based on the type of system, type of service, or a similar categorization.
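
As a hypothetical illustration of "mapping between models": the enterprise keeps one canonical set of roles and translates them into each vendor's security model. The system names and roles below are invented for the sketch:

    # One enterprise role, expressed differently in each vendor's model.
    ROLE_MAP = {
        "crm":  {"finance-analyst": "ReadOnlyUser", "sales-manager": "Admin"},
        "itsm": {"finance-analyst": "viewer",       "sales-manager": "approver"},
    }

    def provision(enterprise_role, system):
        # Translate a canonical enterprise role to the target system's role.
        try:
            return ROLE_MAP[system][enterprise_role]
        except KeyError:
            raise ValueError(f"no mapping for {enterprise_role!r} on {system!r}")

    print(provision("finance-analyst", "crm"))    # ReadOnlyUser
    print(provision("sales-manager", "itsm"))     # approver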

Expectation of Evolution

This can easily be related to the change and release management of independent vendor systems, and to changes in the common infrastructure of the enterprise itself. If governance cannot eliminate the independent evolution of components within the system of systems, it should aim to reduce the harmful effects of uncontrolled evolution by the component systems. Thus, policies must be created and enforced to provide rules and guidance for components as they change.

At a minimum, governance for evolution should include rules and guidelines for

  • informing other component systems (when known) of changes in the interfaces to and functionality of one system
  • coordinating schedules with other component systems so that those that have to change can do so together (when backward compatibility of interfaces cannot be maintained)
  • maintaining multiple versions of the system when schedules cannot be coordinated
  • developing each system to insulate it from changes in other component systems (see the sketch after this list)
  • minimizing the perturbations to interfaces when changing a system
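
A common way to achieve the insulation item above is a thin adapter that the rest of our system codes against, so an upstream interface change is absorbed in one place. A minimal sketch, with hypothetical partner clients:

    class PartnerClientV1:
        def fetch(self, order_id):
            return {"id": order_id, "amount_cents": 1999}

    class PartnerClientV2:
        # The partner's new release renamed the call and changed units.
        def get_order(self, order_id):
            return {"order_id": order_id, "amount": 19.99}

    class OrderAdapter:
        """Our system depends only on this stable interface."""
        def __init__(self, client):
            self.client = client

        def order_total(self, order_id):
            if hasattr(self.client, "get_order"):                     # v2
                return self.client.get_order(order_id)["amount"]
            return self.client.fetch(order_id)["amount_cents"] / 100  # v1

    for client in (PartnerClientV1(), PartnerClientV2()):
        print(OrderAdapter(client).order_total(42))    # 19.99 both times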

Highly Fluid Processes

Agility is the order of the day in every function of the enterprise. Being agile and responding to changes quickly gives enterprises their competitive edge, and that applies equally to the governance framework.

Planning for rapid changes in system-of-systems governance is needed. For example, governance strategies may provide a mechanism for adapting to rapid policy change, such as a way to relax security policies to achieve some urgent goal and then tighten them up again (a sketch follows below). Governance policies should be framed so that they are flexible around the systems or components most likely to see problems or rapid change, and relatively stable as we move away from such systems. For example, a neighbourhood of closely related systems might be the first to notice a problem with a current component or process and will need to respond quickly. At the extreme, where neighbourhoods of related systems are themselves fluid, some details of system-of-systems governance policies might be negotiated.
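
One hypothetical reading of "relax, then tighten again" is a policy object with an explicitly time-boxed emergency window, so the relaxation expires by construction rather than by someone remembering to revert it:

    import time

    class AccessPolicy:
        def __init__(self):
            self.relaxed_until = 0.0

        def relax(self, minutes):
            # Open a time-boxed emergency window, e.g. during an incident.
            self.relaxed_until = time.time() + minutes * 60

        def allows(self, action):
            if time.time() < self.relaxed_until:
                return True                    # emergency window still open
            return action in {"read"}          # steady state: read-only

    policy = AccessPolicy()
    print(policy.allows("write"))    # False
    policy.relax(minutes=30)
    print(policy.allows("write"))    # True, until the window expires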

Summary

As with the Technical Note, I have not attempted to suggest solutions to the governance challenges listed above. Organizations like ISACA have been researching governance issues, and ISACA’s widely adopted COBIT framework may offer solutions to these problems, which I will explore in future blog posts.

Reference:
SEI Technical Note, System-of-Systems Governance: New Patterns of Thought (October 2006).