Showing posts with label complexity. Show all posts

Saturday, October 1, 2016

DNS Security Extensions - Complexities To Be Aware Of

The Domain Name System (DNS) is, at its core, a distributed database that stores typed values by name. It acts like a phone book for the Internet, translating human-readable domain names into IP addresses. Since nearly all Internet requests are made by domain name, DNS servers must resolve those names into IP addresses, which places a very high load on DNS servers across the world. To sustain such a high volume of requests, DNS employs a tree-like hierarchy in both its naming scheme and its database structure.


However, the wide-open nature of DNS leaves it susceptible to DNS hijacking and DNS cache poisoning attacks, which redirect users to a different address than the one they intended to reach. This means that despite entering the correct web address, the user might be taken to a different website. DNS Security Extensions (DNSSEC) were introduced as the answer to this problem.


DNSSEC is designed to protect Internet resolvers (clients) from forged DNS data and thereby prevent DNS tampering. It offers protection against spoofing of DNS data by providing origin authentication, data integrity, and authenticated denial of existence, all using public-key cryptography. It digitally signs the information published by the DNS with a set of cryptographic keys, making it far harder to fake and thus more secure.


DNSSEC introduces several additional record types to the DNS: RRSIG (the digital signature), DNSKEY (the public key), DS (Delegation Signer), and NSEC (a pointer to the next secure record). It also adds two message header bits: AD (authenticated data) and CD (checking disabled). A DNSSEC-validating resolver uses these records and public-key (asymmetric) cryptography to prove the integrity of the DNS data.


A hash of the public DNSKEY is stored in a DS record, which lives in the parent zone. The validating resolver retrieves the DS record from the parent along with the child zone's corresponding signature (RRSIG) and public key (DNSKEY); a hash of that public key is in turn available from its own parent. This forms a chain of trust, also called an authentication chain. The validating resolver is configured with a trust anchor: the starting point of the chain, which refers to a signed zone. The trust anchor is a DNSKEY or DS record and should be retrieved securely from a trusted source.
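To make the DS linkage concrete, here is a minimal sketch in Python (standard library only) of how a parent-zone DS digest is derived from a child zone's DNSKEY, following the RFC 4034 construction of hashing the owner name in wire format together with the DNSKEY RDATA. The record bytes below are fabricated purely for illustration.

```python
import hashlib

def name_to_wire(name: str) -> bytes:
    """Encode a domain name in DNS wire format (length-prefixed labels)."""
    wire = b""
    for label in name.rstrip(".").split("."):
        wire += bytes([len(label)]) + label.lower().encode()
    return wire + b"\x00"

def ds_digest(owner: str, dnskey_rdata: bytes) -> str:
    """SHA-256 digest the parent zone would publish in a DS record:
    hash over the owner name (wire format) followed by the DNSKEY RDATA."""
    return hashlib.sha256(name_to_wire(owner) + dnskey_rdata).hexdigest()

# Hypothetical DNSKEY RDATA bytes, for illustration only
fake_rdata = bytes.fromhex("0101030d") + b"\x00" * 32
print(ds_digest("example.com", fake_rdata))
```

A validating resolver performs the same computation on the DNSKEY it receives from the child zone and compares the result against the DS record signed by the parent.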


Successful implementation of DNSSEC depends on its deployment at all levels of the DNS architecture and its adoption by everyone involved in the DNS resolution process. A major step was taken in July 2010 when the DNS root zone was signed. Since then, resolvers can be configured with the root zone as a trust anchor, allowing the complete chain of trust to be validated for the first time. The introduction and use of DNSSEC was controversial for over a decade due to its cost and complexity. However, its usage and adoption are steadily growing, and in 2014 DNS overseer ICANN determined that all new generic top-level domains would have to use DNSSEC.


Implementing DNSSEC is not always problem-free. Some faults in DNS are only visible with DNSSEC, and then only when validating, which makes debugging DNSSEC difficult. DNS software components specific to DNSSEC still have many issues to be ironed out, leading to disruptions in service. Interoperability among different DNS implementations adds to the problems. Above all, attackers can abuse improperly configured DNSSEC domains to launch denial-of-service attacks. The following are some of the major complexities that one should be aware of.


Zone Content Exposure

DNS is split into smaller pieces called zones. A zone typically starts at a domain name and contains all records pertaining to its subdomains. Each zone is managed by a single administrator. For example, kannan-subbiah.com is a zone containing all DNS records for kannan-subbiah.com and its subdomains (e.g. www.kannan-subbiah.com, links.kannan-subbiah.com). Unlike plain DNS, DNSSEC responses are signed at the level of the signed zone. As such, enabling DNSSEC may expose otherwise obscured zone content. Subdomains are sometimes used as login portals or other services that the site owner wants to keep private. A site owner may not want to reveal that “secretbackdoor.example.com” exists, in order to protect that site from attackers.


Non-Existent Domains

Unlike standard DNS, where the server returns an unsigned NXDOMAIN (Non-Existent Domain) response when a subdomain does not exist, DNSSEC guarantees that every answer is signed. For statically signed zones there are, by definition, a fixed number of records. Since each NSEC record points to the next, the result is a finite ‘ring’ of NSEC records that covers all the subdomains. This technique may unveil internal records if the zone is not configured properly: the information obtained can be used to map network hosts by enumerating the contents of a zone.
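The enumeration idea can be simulated offline. The sketch below (plain Python; the zone and its names are hypothetical) follows the NSEC ‘next’ pointers until the chain wraps back to its start, recovering every name in the zone:

```python
# Simulated NSEC chain: each existing name points to the next name in
# canonical order, and the last entry wraps back to the first (a 'ring').
nsec_chain = {
    "example.com":                "mail.example.com",
    "mail.example.com":           "secretbackdoor.example.com",
    "secretbackdoor.example.com": "www.example.com",
    "www.example.com":            "example.com",   # wraps around
}

def walk_zone(chain, start):
    """Enumerate every name in the zone by following NSEC 'next' pointers."""
    names, current = [], start
    while True:
        names.append(current)
        current = chain[current]
        if current == start:
            return names

print(walk_zone(nsec_chain, "example.com"))
```

Against a real zone, each NSEC record is obtained simply by querying for a name that falls in the gap the previous record revealed, so a handful of queries suffices to dump the whole zone, including names like secretbackdoor.example.com that the operator never intended to advertise.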


The NSEC3-walking attack

DNSSEC has undergone multiple revisions, and NSEC3 is the current replacement for NSEC. "NSEC3 walking" is an easy privacy-violating attack against the current version of DNSSEC. After a few rounds of requests to a DNSSEC server, the attacker can collect a list of hashes of existing names. The attacker can then guess a name, hash the guess, check whether the hash is in the list, and repeat. Compared to normal DNS, current DNSSEC (with NSEC3) makes such privacy violations thousands of times faster for casual attackers, or millions of times faster for serious attackers. It also makes them practically silent: the attacker tests guesses offline, rather than flooding the legitimate servers with queries. This is despite NSEC3 being advertised as a significant improvement over NSEC.
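The guess-and-check loop can be sketched in a few lines of Python. The hash below follows the RFC 5155 construction (SHA-1 over the wire-format name plus salt, then re-hashed for a number of extra iterations); the salt, iteration count, and zone names are made up for illustration, and real NSEC3 records encode the hash in Base32hex rather than hex:

```python
import hashlib

def nsec3_hash(name: str, salt: bytes, iterations: int) -> str:
    """RFC 5155-style NSEC3 hash: SHA-1 over the wire-format name plus
    salt, re-hashed 'iterations' additional times (hex for readability)."""
    wire = b""
    for label in name.rstrip(".").split("."):
        wire += bytes([len(label)]) + label.lower().encode()
    wire += b"\x00"
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest.hex()

salt, iterations = bytes.fromhex("aabbccdd"), 10

# Hashes an attacker has already collected by walking the NSEC3 chain:
collected = {nsec3_hash(n, salt, iterations)
             for n in ("www.example.com", "secretbackdoor.example.com")}

# Offline dictionary attack: hash each guess and test for membership.
# No further queries to the authoritative server are needed.
for guess in ("mail.example.com", "secretbackdoor.example.com"):
    if nsec3_hash(guess, salt, iterations) in collected:
        print("zone contains:", guess)
```

The salt and iteration count are published in the NSEC3 records themselves, which is why the entire dictionary attack runs silently on the attacker's own hardware.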


Key Management

DNSSEC was designed to operate in various modes, each providing different security, performance and convenience tradeoffs. The most common DNSSEC mode is offline signing of static zones, which allows the signing system to be highly protected from external threats by keeping the private keys on a machine that is not connected to the network. This operating model works well when the DNS information does not change often. Live signing, by contrast, solves the zone content exposure problem in exchange for less secure key management.

Key management for DNSSEC is similar to key management for TLS and has similar challenges. Enterprises that decide to manage DNSSEC internally need to generate and manage two sets of cryptographic keys: the Key Signing Key (KSK), critical in establishing the chain of trust, and the Zone Signing Key (ZSK), used to sign the domain name's zone. Both types of keys need to be changed periodically in order to maintain their integrity. The more frequently a key is changed, the less material an attacker has available for the cryptanalysis that would be required to reverse-engineer the private key.

An attacker could decide to launch a Denial of Service (DoS) attack at the time of a key rollover. That is why it is recommended to introduce some "jitter" into the rollover plan by adding a small random element to the schedule. Instead of rolling the ZSK every 90 days like clockwork, pick a time within a 10-day window on either side, so that the schedule is not predictable.
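A jittered schedule might be sketched as follows (a hypothetical Python helper; the 90-day nominal interval and 10-day window simply mirror the example above):

```python
import random
from datetime import date, timedelta

def next_zsk_rollover(last_rollover: date,
                      nominal_days: int = 90,
                      jitter_days: int = 10) -> date:
    """Pick the next ZSK rollover date: the nominal interval plus a
    random offset within +/- jitter_days, so an attacker cannot plan
    a DoS attack around a predictable rollover moment."""
    offset = random.randint(-jitter_days, jitter_days)
    return last_rollover + timedelta(days=nominal_days + offset)

print(next_zsk_rollover(date(2016, 10, 1)))
```

Each rollover thus lands somewhere in a 20-day window rather than on a fixed, publicly guessable date.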


Reflection/Amplification Threat

DNSSEC works over UDP, and the answers to DNS queries can be very long, containing multiple DNSKEY and RRSIG records. This makes it an attractive target for attackers, since it allows them to ‘amplify’ their reflection attacks. If a small volume of spoofed UDP DNSSEC requests is sent to nameservers, the victim receives a large volume of reflected traffic, sometimes enough to overwhelm the victim's server and cause a denial of service. Specifically, an attacker sends a small request with a spoofed source address to a server, which then reflects the response to the victim. Using DNSSEC's extra-large responses, the attacker can amplify the volume of traffic sent, anywhere up to 100 times. That makes it an extremely effective tool in efforts to take servers offline.
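The amplification arithmetic is straightforward; the packet sizes below are purely illustrative, not measured values:

```python
def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """Ratio of the reflected response size to the spoofed query size:
    how much attacker bandwidth is multiplied by the reflector."""
    return response_bytes / query_bytes

# Illustrative sizes: a ~64-byte query vs. a signed response carrying
# multiple DNSKEY and RRSIG records (numbers are hypothetical).
print(amplification_factor(64, 3200))   # prints 50.0
```

With an amplification factor like this, an attacker controlling 1 Mbit/s of spoofed query traffic can direct tens of megabits per second of signed responses at the victim.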



The problem isn't with DNSSEC or its functionality, but rather how it's administered and deployed. DNSSEC is the best way to combat DNS hijacking, but the complexity of the signatures increases the possibility of administrators making mistakes. DNS is already susceptible to amplification attacks because there aren't a lot of ways to weed out fake traffic sources.


"DNSSEC prevents the manipulation of DNS record responses where a malicious actor could potentially send users to its own site. This extra security offered by DNSSEC comes at a price as attackers can leverage the larger domain sizes for DNS amplification attacks," Akamai said in a report.

Friday, June 19, 2015

Information Security - Reducing Complexity


Change is constant, and everything around us is evolving. This evolution is happening primarily in the following areas:

Threats:

The threat landscape has changed drastically since the 1980s and 1990s. Between 1980 and 2000, a good anti-virus and firewall solution was considered enough for an organization. That is no longer the case: hackers now use sophisticated tools, technology and skills to attack organizations. The motives behind hacking have also evolved, to the point where hacking, though illegal, has become a commercially viable profession or business.

Compliance:

With the pace at which the threat landscape is evolving, governments have reason to be concerned, as they are increasingly leveraging technology to better serve citizens and thereby opening the door to increased security risk. To combat such challenges, governments have come up with regulatory compliance requirements, making matters even more complex for the CSOs of enterprises.

Technology:

Technology is evolving at a much faster pace, and the things around us are getting smarter, with the ability to connect and communicate over the Internet. At the same time, considerable progress has been made in Artificial Intelligence, Machine Learning and related fields. These newer ‘smart things’ add to the complexity, as CSOs have to handle the threats they bring to the surface.

Needless to say, hackers too make the best use of this technology evolution, improving their attack capabilities day by day.

Business Needs:

The driver for adopting these evolving technologies is business need. As businesses want to stay ahead of the competition, they leverage emerging technologies to surge ahead. With shorter times to market, all departments, including the security organization, must be capable of accepting and implementing such changes at a faster pace. Under this time pressure, there is a tendency to look for easier and quicker ways to implement changes, ignoring best practices.


Consumerization

IT today is expected to simplify things for consumers within and outside the organization. This raises user expectations, leading to many changes, some of them unrealistic. It includes users bringing their own anything (BYOA), and may soon include Bring Your Own Identity, with chips implanted under the skin. Employees at EpiCenter, the new high-tech office campus in Sweden, can already wave their hands to open doors, thanks to an RFID chip implanted under the skin.

Connected world

Most enterprises are now connected with their business partners for exchanging business data. With this, the IT system perimeter extends, to some extent, to that of the partners as well. Rules and policies have had to be relaxed to support such connected systems. Now the things we use every day are transforming into connected things, adding further to the complexity.

Big data

The volume of security-relevant data, such as logs and events, has grown to the point of requiring big data tools to handle it. While this complexity existed earlier, the attacks were not so sophisticated then. With today's level of sophistication on the attack surface, simplifying the handling of such huge volumes of data is very much required.

Skillset

The threat landscape is widening and the attacks are getting sophisticated, which call for even better tools and technologies to be used to prevent or counter them. This means that there is a continuous change in the method, approach, tools and technology used, making it difficult to maintain and manage the skills of the human resources.

Application Eco System

A midsized organization will have hundreds of applications, each needing different exceptions to policies and rules. These applications may in turn use third-party components, so the chance of a vulnerability within them is very high. Given that these applications constantly change and evolve, code or components left behind might expose a vulnerability.


How does complexity impact security

Complexity impacts the security capability in many ways and the following are some:

Accuracy in Detection

Complexity makes the detection of a compromise difficult. Handling and correlating large volumes of logs from different devices, and from different vendors at that, will always be a challenge, making timely and accurate detection a remote possibility. A successful countermeasure requires accurate detection in the pre-infection stage, or at least during infection. The later a compromise is detected, the more complex it is to counter.

Resources

Each new security technology requires people to properly deploy, operate and maintain it. But it is difficult to add new heads to the security organization every time a new tool or technology is considered. Similarly, legacy solutions put in by former employees who are no longer with the organization are likely to remain untouched, for fear of breaking something.

Vulnerabilities and Exposures

With the huge number of applications used by the enterprise, assessing vulnerabilities is a complex and huge exercise, unless it is integrated into the build and delivery process by mandating a security vulnerability assessment. With innumerable applications, components and operating systems connecting to the enterprise network, covering everything is almost impossible. And with wearables and other smart things connecting to the network, who knows what vulnerabilities exist in them, waiting to be exploited by hackers.

Methods for reducing complexity

Complexity is certainly bad, and reducing it will be beneficial, both in terms of cost and otherwise. However, simplification should not come at the price of compromising the needed detection and protection abilities. A balanced approach is necessary, so that risk, cost and complexity are well balanced and beneficial to the organization. The following are some of the methods that may help reduce complexity:

  • Integrated processes as against isolated security processes. Every Business process should have the security related processes integrated within, so that every person in the organization will by default contribute towards security. The security process framework shall be designed in such a manner that it evolves over a period based on experience and feedback.
  • Practicing Agile approach within the security organization, so that the complexity is hidden within tools and appliances by automating the same. Agile approach also helps the security organization to embrace changes faster, especially, when implementing changes in response to a detected threat or compromise. One has to carefully adopt such practices into the Security framework.
  • Outsourcing security operations to Managed Security Service Providers (MSSPs) is certainly an option for small and medium enterprises; it takes some of the complexity away and thus benefits the organization. Needless to say, outsourcing does not absolve the security organization of responsibility for any security incident or breach.
  • “Shrinking the Rack” – Consolidating technologies whereby devices combining multiple technology and capability within it may make it easier for deployment and administration. At the same time this has the risk of ‘having all eggs in one basket’, i.e. when such a device or solution is hacked, then it is far and wide open for the hackers.
  • Mandating periodic code, component and process refactoring, whereby unneeded legacy code, components and processes are periodically reviewed and removed from the system. This will help keep the applications maintainable and secure. Also, implant security as a culture among all employees, so that they handle security indicators responsibly.

Sunday, August 24, 2014

Perspectives of Business Reference Model

We are all witnessing the steady progress of the Enterprise Architecture (EA) discipline, and it is now well understood that EA is not just about IT infrastructure: Business Architecture (BA) forms an integral part of it. Unlike in the past, when Business Architecture was used mainly for eliciting requirements for IT systems, BA is now used to develop and describe the target business model and to work out a road map that will take the business towards that target. The Open Group, as part of its "World Class EA" series, has published a White Paper on the Business Reference Model with the objective of helping organizations develop BA assets and plan for the future.


The Open Group has developed the Business Reference Model (BRM) to facilitate the description of a business model through five perspectives. The following diagram provides an overview of the structure and content of the BRM:

Image Source: The Open Group's World Class EA: Business Reference Model white paper.


Environment Perspective:

The Environment perspective addresses the context within which an organization must operate. It describes the external factors, such as the competitors and customers for an organization, in addition to the pre-established strategy defined by the organization for market positioning. This perspective is intended to describe why an organization is motivated to undertake particular courses of action.

The goal of understanding the business environment is to provide a good contextual knowledge base that informs the creation of effective architectures in the Value Proposition, Operating Model, and Risk perspectives.

The business challenge is to gain and exploit insights into the market, competition, and customer base that allow the organization to position itself optimally (described through strategy).


Value Proposition Perspective:

The Value Proposition perspective describes the offering produced by the organization in terms of products, services, brand, and shareholder value. It creates a belief in existing customers, prospective customers, stakeholders, or other constituent groups within or outside the organization about where value will be experienced, usually in exchange for economic value or some form of compensation.

The goal of understanding the value proposition is that it defines the customer experience and sets shareholder expectations. The value proposition also provides a baseline set of needs that need to be fulfilled by the Operating Model perspective. 

The business challenge is to develop a value proposition that is able to attract a suitable customer base, fulfil the needs of the customer base effectively, and generate sufficient benefit to satisfy shareholder expectations. All this needs to be achieved in a way that is consistent with, and reinforces, brand image and brand values.


The Operating Model Perspective:

The Operating Model perspective describes the resources at the disposal of the organization that will be deployed to generate the value proposition. This perspective is intended to describe how an organization will be able to deliver on its value proposition. Capabilities are the core enablers to operate the business from the perspectives of people, process, technology, and information.

The goal of operating model design is to allow executives and planners to evaluate the business through a wide variety of lenses and viewpoints in order to identify desired and enhanced states of the organization.

The business challenge is to identify the correct alignment of resources that will deliver the necessary customer and shareholder experience. Typical trade-offs to evaluate when structuring capabilities include centralization versus federation, matrix organization structures versus vertical integration, core versus context analysis, and process alignment versus competency alignment. The results of these trade-offs will produce different levels of efficiency versus agility versus stakeholder experience across different areas of the business.

The Risk Perspective

The Risk perspective identifies the uncertainties that may surround an organization in its delivery of the value proposition. This perspective is intended to describe the threats that face an organization from within and without. Typically, organizations model their architecture around the known, repeatable aspects of business operations. However, within a complex and volatile environment, unforeseen circumstances frequently occur in ways that may be extremely damaging to the business.

The goal of risk analysis is to gain a full understanding of potential scenarios that may adversely impact the business and then to prepare appropriately to address those risks in the event that they occur.
The business challenge of risk modelling is to ensure that risks are adequately understood (it is a great challenge to test for completeness in an exercise of identifying unlikely or unforeseeable scenarios), the impact of risk is appropriately quantified (again, challenging to accurately determine when there is limited precedent), and the mitigation steps for risks are appropriate to the risk level (in many organizations, over-compensation for risk can be as damaging as under-compensation, as valuable business activities are curtailed due to risk concerns).


The Compliance Perspective

The Compliance perspective represents activities that the organization must carry out in order to assure that the value proposition is delivered using an acceptable standard of business practice. This perspective is intended to describe the constraints that prevent an organization from acting in negative, destructive, or inappropriate ways. In many cases, compliance can offer opportunities for organizations to differentiate, by being first to access new markets by being compliant with new legislation.

The Compliance perspective acts in a similar manner to the Environment perspective in that it influences across value proposition, operating model, and risk, constraining all activities of the business to be in compliance with standards of acceptability.

The goal of the compliance architecture is to adequately understand the compliance requirements that exist and to ensure that appropriate mechanisms are in place to ensure they are met.

The business challenge of compliance is to appropriately translate commercial, quality, ethical, legal, and regulatory constraints (which tend to be complex and open to interpretation) into a set of clear, unambiguous operational policies that can be followed consistently and at scale within a large organization. Interpretations that are too risk-seeking in nature will tend to generate compliance breaches, with associated financial and reputational penalties. Interpretations that are too risk-averse will tend to stifle business activities and reduce the ability of the business to change quickly to meet new environmental circumstances.


This blog post contains excerpts from the white paper "World Class EA: Business Reference Model" published by The Open Group; the white paper is available for download.

Sunday, July 20, 2014

A Checklist for Architecture & Design Review

Security requirements mostly remain undocumented, left to the choice or experience of architects and developers, leaving vulnerabilities in the application that hackers exploit to launch attacks on the enterprise's digital assets. Security threats are on the rise and are now a board-level item, as the impact of a security breach is very high and can cause both monetary and non-monetary losses.

One of the key aspects of IT governance is to ensure that investments made in software assets are optimal and that there is a quantifiable return on such investments. This also means that such investments should not lead to risks that could result in damages. Most of us are well aware that reviews play a key role in ensuring the quality of software assets. As such, in this blog post I have tried to come up with a checklist for reviewing the architecture and design of a software application.

Since the choice of one design best practice is often interdependent with another, a careful trade-off analysis of software quality attributes is necessary. Each checklist item listed here needs further elaboration and the identification of specific practices, which will depend on the enterprise architecture and design principles of the organization.

Deployment Considerations

  • The design references the security policy of the organization and is in compliance of the same.
  • The application components are designed to comply with the various networking and other infrastructure related security restrictions like firewall rules, using appropriate secure protocols, etc.
  • The trust level with which the application accesses various resources are known and are in line with the acceptable practices.
  • The design supports the scalability requirements such as clustering, web farms, shared session management.
  • The design identifies the configuration / maintenance points, and the access to the same is manageable.
  • Communication with various local or remote components of the application is using secure protocols.
  • The design addresses performance requirements by adhering to relevant design best practices.

Application Architecture Considerations

Input Validation

  • Whether the design identifies all entry points and trust boundaries of the application.
  • Appropriate validations are in place for all inputs that come from outside the trust boundary.
  • The input validation strategy that the application adopted is modular and consistent.
  • The validation approach is to constrain, reject, and then sanitize input.
  • The design addresses potential canonicalization issues.
  • The design addresses SQL Injection, Cross-Site Scripting and other vulnerabilities.
  • The design applies defense in depth to the input validation strategy by providing input validation across tiers.
Authentication
  • The design identifies the identities or roles that are used to access resources across the trust boundaries.
  • Service accounts or other predefined identities needed to access various system resources are identified and documented.
  • User credentials or authentication tokens are stored in secure manner and access to the same is appropriately controlled and managed.
  • Where the credentials are shared over the network, appropriate security protocol and encryption techniques are used.
  • Appropriate account management policies are considered.
  • In case of authentication failures, the error information displayed is minimal so that it does not reveal any clues that could make the credential guessing easier.
  • The design adopts a policy of using least-privileged accounts.
  • Password digests with salt are stored in the user store for verification.
  • Password rules are defined so that stronger passwords are enforced.
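The salted-digest item above can be illustrated with a short sketch using the Python standard library; the PBKDF2 iteration count here is an illustrative choice, not a recommendation from the checklist itself.

```python
import hashlib, hmac, os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Store only a salted, slow digest of the password (PBKDF2-HMAC-SHA256
    with 100k iterations here), never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess123", salt, digest))                      # False
```

The random per-user salt defeats precomputed rainbow tables, and the deliberately slow key derivation raises the cost of brute-forcing a stolen user store.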
Authorization
  • The user role design offers sufficient separation of privileges and considers authorization granularity.
  • Multiple gatekeepers are envisaged for defense in depth.
  • The application’s identity is restricted in the database to access-specific stored procedures and does not have permissions to access tables directly.
  • Access to system level resources are restricted unless there is an absolute necessity.
  • Code Access Security requirements are established and considered.
Configuration Management
  • Stronger authentication and authorization are considered for access to administration modules.
  • Secure protocols are used for remote administration of the application.
  • Configuration data is stored in a secured store and access to the same is appropriately controlled and managed
  • Least-privileged process accounts and service accounts are used.
Sensitive Data
  • Design recognizes sensitive data and considers appropriate checks and controls on the same.
  • Database connections, passwords, keys, or other secrets are not stored in plain text.
  • The design identifies the methodology to store sensitive data securely. Appropriate algorithms and key sizes are used for encryption.
  • Error logs, audit logs and other application logs do not store sensitive data in plain text.
  • The design identifies protection mechanisms for sensitive data that is sent over the network.
Session Management
  • The contents of authentication cookies are encrypted.
  • Session lifetime is limited and times out upon expiration.
  • Session state is protected from unauthorized access.
  • Session identifiers are not passed in query strings.
Cryptography
  • Platform-level cryptography is used and it has no custom implementations.
  • The design identifies the correct cryptographic algorithm and key size for the application’s data encryption requirements.
  • The methodology to secure the encryption keys is identified and the same is in line with the acceptable best practices.
  • The design identifies and establishes the key recycle policy for the application.
Parameter Manipulation
  • All input parameters are validated including form fields, query strings, cookies, and HTTP headers.
  • Sensitive data is not passed in query strings or form fields.
  • HTTP header information is not relied on to make security decisions.
  • View state is protected using MACs.
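The view-state MAC item above can be sketched with a small HMAC example (Python standard library; the key handling and state encoding are simplified assumptions for illustration):

```python
import hashlib, hmac, os

SECRET_KEY = os.urandom(32)   # server-side secret, never sent to the client

def sign(state: bytes) -> bytes:
    """Append an HMAC so any client-side tampering is detectable."""
    return state + hmac.new(SECRET_KEY, state, hashlib.sha256).digest()

def verify(signed: bytes) -> bytes:
    """Split off the 32-byte MAC and recompute it over the state."""
    state, mac = signed[:-32], signed[-32:]
    expected = hmac.new(SECRET_KEY, state, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("view state was tampered with")
    return state

token = sign(b"cart=3;discount=0")
assert verify(token) == b"cart=3;discount=0"
# Altering the state without the server key invalidates the MAC:
# verify(b"cart=3;discount=99" + token[-32:]) raises ValueError
```

Because the key lives only on the server, a client can read the round-tripped state but cannot forge a valid MAC for a modified copy.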
Exception Management
  • The design outlines a standardized approach to structured exception handling across the application.
  • Application exception handling minimizes the information disclosure in case of an exception.
  • Application errors are logged to the error log, and the design provides for periodic review of such logs.
  • Sensitive data is not logged as part of the error logs; where necessary, it is logged with an appropriate de-identification technique.
Auditing and Logging
  • The design identifies the level of auditing and logging necessary for the application and identifies the key parameters to be logged and audited.
  • The design considers how to flow caller identity across multiple tiers at the operating system or application level for auditing.
  • The design identifies the storage, security, and analysis of the application log files

Saturday, March 8, 2014

The Principles of Agile Enterprise Architecture Management

Change is happening everywhere, and at an accelerated rate, not only in IT but in many other functional areas, though the pace is highest in IT. Business users are encouraged to innovate in every possible area, and that brings in more and more transformation projects, an important category in enterprise program and portfolio management. Many of these transformation projects are time-critical: if not implemented on time, the market advantage is lost. Along the same lines, technology adoption has become a key success factor for businesses, and predicting, tracking and embracing upcoming disruptive technologies has become an important business and strategic risk, as it can have wide impact across business strategies, capabilities and processes.

The Enterprise Architecture function shares responsibility for ensuring these changes are embraced with the least impact. While running the business as-is is important, enabling the transformation of business capabilities and information management capabilities is another key goal for Enterprise Architects. One key element Enterprise Architects should consider and address is the complexity of business and IT architecture management, so that transformation projects are implemented on schedule and thus deliver the intended business benefits. While there are other key objectives, such as delivering stakeholder value, managing complexity is the objective that comes closest to being Agile.

Agile Enterprise Architecture is all about letting changes happen and thus keeping the architectural principles continuously evolving. This calls for an appropriate lifecycle that facilitates the continuous evolution, development and adaptation of the current and target reference architectures. The maturity levels of the various IT management functions will also keep changing over time. In this blog, let us focus on the key principles that enable Agile Enterprise Architecture Management:


Value Individuals and Interactions over Tools and Processes

It is a well-established fact that it is people who build success in the enterprise; tools and processes are just enablers. With people being the greatest asset, organizational culture plays an important role in motivating employees to collaborate, innovate and deliver results more effectively and efficiently. Build the EA team so that it has representation in, or an interface with, top management, business and IT owners, business and IT operations teams, and the project teams driving change from within. Choose and deploy the right set of tools, technologies and processes to facilitate collaboration with the different business and IT functions.

The EAM team shall aim for sustainable evolution, at a pace driven by business and IT users; help project teams avoid panics, and discourage culture clashes; and understand that everyone has their own area of expertise and can thus add value to the project or program.


Focus on the demands of top stakeholders and speak their language

Typically, top stakeholders need continuous input from the EA team on various business and IT functions in order to decide on further strategic alignments or improvements, which in turn lead to new transformation projects or a change of course for existing ones. These inputs can take the form of metrics, visualizations and reports, and it is very important that they are relevant and meaningful to the target recipients. The following considerations help ensure that stakeholders realize the maximum value from such inputs:

  • A single number or picture is more helpful than a thousand reports.
  • Avoid waste - share information that is relevant: nothing more, nothing less.
  • Leverage existing processes to generate and deliver these inputs, rather than building a whole set of EA-specific processes.

Promote rapid feedback by working jointly on models and architecture blueprints with other people and functions. Remember that the best way of conveying information is face-to-face conversation, supported by other materials. Shared development of a model at a whiteboard will generate excellent feedback and buy-in. Work as closely as possible with all stakeholders, including your customers and other partners.


Reflect behavior and adapt to changes

The effect of a change ultimately shows in the behavior of individuals, tools, and functions. The EAM function shall attempt to understand the likely direction and behavior of such changes using techniques such as scenario analysis and change cases. This helps the EAM function determine how best to embrace a change in terms of timing, approach and methodology. This is where a pattern-based approach to developing the EAM function facilitates change adoption with much greater ease and far less impact.

EAM should plan for and manage changes, and never resist them. Embracing change may not always be easy, but a well-thought-out EAM evolution lifecycle certainly makes it simpler. One big change can often be broken into several blocks and taken one at a time, depending on the time, effort and business priorities.


Here are some useful references for further reading on agility and Enterprise Architecture Management.

1. Towards an Agile Design of the Enterprise Architecture Management Function

2. Principles for the Agile Architect

3. The Principles of Agile Architecture

4. Actionable Enterprise Architecture (EA) for the Agile Enterprise: Getting Back to Basics