Sunday, August 7, 2016

Distributed Ledger - Strengths That Warrant Its Adoption

Blockchain is one of the most talked-about technologies today and is likely to have a pervasive impact on all industry segments, most notably Banking and Financial Services. Blockchain combines the principles of cryptography, game theory and peer-to-peer networking. Once the formal name for the tracking database underlying the cryptocurrency bitcoin, the term is now used broadly to refer to any distributed ledger that uses software algorithms to record transactions with reliability and anonymity. An increasingly interesting aspect of blockchain use is the concept of smart contracts – whereby business rules implied by a contract are embedded in the blockchain and executed with the transaction.


Built on peer-to-peer technology, blockchain uses advanced encryption to guarantee the provenance of every transaction. The secure and resilient architecture that protects the distributed ledger is one of its key advantages. Other benefits of blockchain include reductions in cost, complexity and time, in addition to trusted record keeping and discoverability. Blockchain has the potential to make trading processes more efficient, improve regulatory control and displace traditional trusted third-party functions. It holds the potential for all participants in a business network to share a system of record. This distributed, shared ledger provides consensus, provenance, immutability and finality around the transfer of assets within business networks.


The Banking and Financial Services industries the world over are looking seriously at this technology. Central Banks in many countries, including India, have formed committees to evaluate the adoption of blockchain technology, which is expected to address some of the problems the industry has been trying to overcome for many years. For the financial services sector, blockchain offers the opportunity to overhaul existing banking infrastructure, speed settlements and streamline stock exchanges. While many institutions understand its potential, they are still trying to work out whether blockchain technology offers a cost-cutting opportunity or represents a margin-eroding threat that could put them out of business.


Like cloud computing, blockchain comes in three categories: public, private and hybrid. A public blockchain is a fully decentralized "trustless" system open to everyone, where the ledger is updated by anonymous users. A private blockchain finds its use within a bank or an institution, where the organization controls the entire system. A hybrid blockchain combines both implementations and is open to a controlled group of trusted and vetted users that update, preserve and maintain the network collectively. Blockchain exploration has propelled banks in multiple directions, from examining fully decentralized systems that embed bitcoin or other virtual tokens to function, to ones where only authorized and vetted users are granted access to a network.


The technology is being commercialised by several industry groups, which are identifying the use cases it will be suitable for across different industry verticals. With the surge in funding for FinTech innovations, blockchain technology may find retail and institutional adoption in about 3 to 5 years, while some expect it to take even longer. Some institutions have invested in in-house development, while others have partnered in their pursuit to adopt blockchain as part of their mainstream business technology.


Listed here are some of the key strengths that drive the adoption of the technology the world over.

Trusted

With the frequency at which data breaches are happening, users are seeking control over sensitive data. Blockchain by its nature puts users in total control. Applied to payments, blockchain allows users to retain control of their information and grant access only to the information about a single transaction. Participants are able to trust the authenticity of the data on the ledger without recourse to a central body. Transactions are digitally signed; the maintenance and validation of the distributed ledger is performed by a network of communicating nodes running dedicated software that replicates the ledger amongst the participants in a peer-to-peer network, guaranteeing the ledger's integrity. Participants will also want the ability to roll back transactions in instances of fraud or error – which can be done on a blockchain by adding a compensating record, as long as there are permission mechanisms to allow this – and a framework for dispute resolution.

Traceability

The cryptographic connection between each block and the next forms one link of the chain. This link preserves a trace of the information flow across the chain, enabling participants or regulators to trace information flows back through the entire chain. The distributed ledger is immutable: entries can be added but not deleted. This information potentially includes, but is not limited to, ownership, transaction history, and the data lineage of information stored on the shared ledger. If provenance is tracked on a blockchain belonging collectively to participants, no individual entity or small group of entities can corrupt the chain of custody, and end users can have more confidence in the answers they receive.
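The linking described above can be sketched in a few lines of Python: each block stores the hash of the previous block, so altering any earlier entry invalidates every later link. This is an illustrative sketch under simplified assumptions, not a production ledger.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify_chain(chain: list) -> bool:
    """Walk the chain and confirm every link is intact."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"owner": "A", "asset": "bond-123"})
append_block(chain, {"owner": "B", "asset": "bond-123"})
assert verify_chain(chain)

# Tampering with an earlier entry breaks every later link.
chain[0]["data"]["owner"] = "X"
assert not verify_chain(chain)
```

This is the property that lets regulators trace custody back through the chain: any retroactive change is immediately detectable by re-hashing the links.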

Resiliency

Blockchain operates seamlessly and removes the dependency on a central infrastructure for service availability. Distributed processing allows participants to continue operating in case of failure of any participant. Data on the ledger is pervasive and persistent, creating reliable distributed storage: transaction data can be recovered from the distributed ledger in case of local system failure, giving the system very strong built-in data resiliency. Distributed ledger-based systems are also more resilient to systemic operational risk because the system as a whole is not dependent on a centralised third party. With many contributors, and thus backups, the ledger exists in multiple copies, which should make it more resilient than a centralised database.

Reconciliation

Use cases that centre on increasing efficiency by removing the need for reconciliation between parties seem particularly attractive. Blockchain provides the benefits of ledgers without suffering from the problem of concentration. Instead, each entity runs a "node" holding a copy of the ledger and maintains full control over its own assets. Transactions propagate between nodes in a peer-to-peer fashion, with the blockchain ensuring that consensus is maintained. Reconciling, or matching and verifying data points through manual or even electronic means, would be eliminated, or at least reduced, because everyone in the network accessing the distributed ledger would be working off exactly the same data. This is especially true in the case of syndicated loans, since information is mutualised and all participants are working from the same data set in real time or near-real time.
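The reconciliation point can be illustrated with a toy sketch: if every participant's copy of the ledger hashes to the same fingerprint, there is nothing to reconcile, while a differing fingerprint pinpoints a divergent copy. The node data below is entirely hypothetical.

```python
import hashlib
import json

def ledger_fingerprint(entries: list) -> str:
    """Hash a full ledger copy so copies can be compared in one operation."""
    return hashlib.sha256(json.dumps(entries, sort_keys=True).encode()).hexdigest()

# Hypothetical copies held by three participants.
node_a = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
node_b = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
node_c = [{"id": 1, "amount": 100}, {"id": 2, "amount": 251}]  # a discrepancy

fingerprints = {ledger_fingerprint(n) for n in (node_a, node_b)}
assert len(fingerprints) == 1        # identical copies: nothing to reconcile

fingerprints.add(ledger_fingerprint(node_c))
assert len(fingerprints) == 2        # node_c diverges and needs attention
```

In a real distributed ledger the consensus protocol prevents such divergence from persisting in the first place, which is why the manual matching step disappears.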

Distributed

When a blockchain transaction takes place, a number of networked computers process the algorithm and confirm one another's calculations. The record of such transactions thus continually expands and is shared in real time by thousands of people. Billions of people around the world lack access to banks and currency exchange. Blockchain-based distributed ledgers could change this. Just as the smartphone gave people without telephone lines access to communication, information, and electronic commerce, these technologies can provide a person the legitimacy needed to open a bank account or borrow money — without having to prove ownership of real estate or meet other qualifications that are challenging in many countries.


Efficiency Gains

Removing slow, manual and exception-prone steps from existing end-to-end processes will lead to significant efficiency gains. Blockchain also removes the need for a clearing house or financial establishment to act as intermediary, facilitating quick, secure, and inexpensive value exchange. Blockchain ensures the most effective alignment between usage and cost due to its transparency, accuracy and the significantly lower cost of cryptocurrency transactions. Distributed ledger technology has the potential to reduce duplicative recordkeeping, eliminate reconciliation, minimise error rates and facilitate faster settlement. In turn, faster settlement means less risk in the financial system and lower capital requirements.

Sunday, April 10, 2016

Economics of Software Resiliency

Resilience is a design feature that enables software to recover from the occurrence of a disruptive event – in effect, automated recovery after such events occur. Given the option, we would want the software we build or buy to have resilience built in. Obviously, resilience comes at a cost, and the economics of the benefit should be assessed before deciding what level of resilience is required. There is a need to balance the cost and effectiveness of the recovery or resilience capabilities against the events that cause disruption or downtime. These costs may be reduced, or rather optimized, if the expectation of failure or compromise is lowered through preventative measures, deterrence, or avoidance.

There is a trade-off between protective measures and investments in survivability, i.e., the cost of preventing the event versus recovering from the event. Another key factor that influences this decision is the cost of such an event if it occurs. This suggests that a number of combinations need to be evaluated, depending on the resiliency of the primary systems, the criticality of the application, and the options as to backup systems and facilities.

This analysis in a sense will be identical to the risk management process. The following elements form part of this process:


Identify problems


The events that could lead to failure of software are numerous. Developers know that exception handling is an important best practice to adhere to while designing and developing a software system. Most modern programming languages provide support for catching and handling exceptions. At a low level, this helps in identifying the exceptions encountered by a particular application component at run time. There may be certain events that cannot be handled from within the component and require an external component to monitor and handle them. Beyond the exception-handling ability of the programming language, the architects designing the system should identify and document such exceptions and design a solution to handle them, so that the system becomes more resilient and reliable. The following would primarily bring out possible problems or exceptions that need to be handled to make the system more resilient:


  • Dependency on Hardware / Software resources - Whenever the designed system needs to access a hardware resource, for example a specified folder on the local disk drive, anticipate the folder not being there, the application context not having enough permissions to perform its actions, disk space being exhausted, etc. This equally applies to software resources like an operating system, a third-party software component, etc.
  • Dependency on external Devices / Servers / Services / Protocols - Access to external devices like printers, scanners, etc., or other services exposed for use by the application system, like an SMTP service for sending emails, database access, or a web service over the HTTPS protocol, could also cause problems, such as the remote device not being reachable, a protocol mismatch, request or response data inconsistency, access permissions, etc.
  • Data inconsistency - In complex application systems, certain scenarios could lead to inconsistent internal data, which may cause the application to get into a deadlock or never-ending loop. Such a situation may have a cascading effect, as such components quickly consume considerable system resources, leading to a total system crash. This is typical in web applications: each external request is executed in a separate thread, and when such threads get into a 'hung' state, the request queue will soon surpass the installed capacity.
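As a minimal illustration of the first category above, here is a Python sketch that writes a file while anticipating a missing folder, insufficient permissions and an exhausted disk. The checks and the free-space threshold are simplified assumptions, not a complete treatment.

```python
import os
import shutil

def write_report(folder: str, name: str, payload: str) -> bool:
    """Write a file defensively, anticipating the failure modes above."""
    try:
        os.makedirs(folder, exist_ok=True)       # the folder may not exist yet
        free = shutil.disk_usage(folder).free    # the disk may be (nearly) full
        if free < len(payload.encode()):
            raise OSError("insufficient disk space")
        with open(os.path.join(folder, name), "w") as fh:
            fh.write(payload)
        return True
    except PermissionError:
        # the application context lacks rights on this path
        return False
    except OSError as exc:
        # disk full, invalid path, device errors, ...
        print(f"could not write report: {exc}")
        return False
```

The point is not the specific checks but the habit: every resource dependency named in the list above gets an explicit, recoverable failure path instead of an unhandled crash.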


Cost of Prevention / Recovery


The cost of prevention depends on the available solutions to overcome or handle such exceptions. For instance, if the issue is the SMTP service being unavailable, the solution could be to have an alternate, redundant, always-active SMTP service running out of a totally different network environment, so that the system can switch to the alternate service if it encounters issues with the primary one. While the cost of implementing the handling of multiple SMTP services and a fail-over algorithm may not be significant, maintaining a redundant SMTP service could have a significant cost impact. Thus, for each event that may have an impact on software resilience, the total cost of a pro-active solution vis-a-vis a reactive solution should be assessed.
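The fail-over algorithm itself is cheap to implement, as a sketch shows. The relay hosts below are placeholders, and the generic helper simply tries each delivery attempt in order, surfacing the last error only if all of them fail.

```python
import smtplib

def first_success(attempts):
    """Run callables in order; return the first result, raising only if all fail."""
    last_error = None
    for attempt in attempts:
        try:
            return attempt()
        except Exception as exc:
            last_error = exc          # remember the failure and try the next one
    raise RuntimeError(f"all attempts failed: {last_error}")

def smtp_send(host, port, sender, rcpt, message, timeout=10.0):
    """One delivery attempt against a single relay."""
    with smtplib.SMTP(host, port, timeout=timeout) as smtp:
        smtp.sendmail(sender, rcpt, message)
        return host

# Usage (hosts are hypothetical): primary relay first, independent backup second.
# first_success([
#     lambda: smtp_send("smtp-primary.example.com", 587, "a@x", "b@y", "hi"),
#     lambda: smtp_send("smtp-backup.example.net", 587, "a@x", "b@y", "hi"),
# ])
```

The code is a few lines; the recurring cost sits in operating the second, independently hosted relay, which is exactly the trade-off the paragraph describes.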

Time to Recover & Impact of Event


While the cost of prevention / recovery as assessed above indicates how expensive the solution is, the Time to Recover and the Impact of such an event happening indicate the cost of not having the event handled or worked around. Simple issues like a database deadlock may be reactively handled by the DBAs, who will be monitoring for such issues and will act immediately when such an event arises. But issues like the network link to an external service failing may mean extended system unavailability, thus impacting the business. So it is critical to assess the time to recover and the impact such an event may have if not handled instantly.

Depending on the above metrics, the software architect may suggest a cost-effective solution to handle each such event. The level of resiliency that is appropriate for an organization depends on how critical the system in question is for the business, and on the impact that a lack of resilience would have on the business. Resiliency has its own cost-benefit trade-off, and the architects should keep this in mind and design solutions to suit the specific organization.
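The trade-off can be framed as simple expected-loss arithmetic. All figures below are hypothetical, purely to show the shape of the comparison.

```python
def expected_annual_loss(probability_per_year: float, impact_cost: float) -> float:
    """Expected yearly cost of leaving a failure event unhandled."""
    return probability_per_year * impact_cost

# Hypothetical figures: a network-link failure twice a year costing 50,000
# per outage, versus a 60,000 one-off redundancy investment amortised over
# five years plus 10,000 per year to maintain it.
loss = expected_annual_loss(2.0, 50_000)     # 100,000.0 per year
prevention_yearly = 60_000 / 5 + 10_000      # 22,000.0 per year
assert loss > prevention_yearly              # here, redundancy pays for itself
```

When the inequality runs the other way, a reactive plan (monitoring plus a documented recovery procedure) may be the cost-effective choice instead.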

The following are some of the best practices that the architects and the developers should follow while designing and building the software systems:
  • Avoid usage of proprietary protocols and software that make migration or graceful degradation very difficult.
  • Identify and handle single points of failure. Of course, building redundancy has a cost.
  • Loosely couple service integrations, so that inter-dependence of services is managed appropriately.
  • Identify and overcome weak architecture / designs within the software modules or components.
  • Anticipate failure of every function and design for fall-back scenarios and graceful degradation where appropriate.
  • Design to protect state in multi-threaded and distributed execution environments.
  • Expect exceptions and implement safe use of inheritance and polymorphism.
  • Manage and handle the bounds of various software and hardware resources.
  • Manage allocated resources by using them only when needed.
  • Be aware of timeouts of various services and protocols and handle them appropriately.
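The timeout guidance in the last point can be sketched as a bounded retry helper with exponential backoff. The retried exception types and the delay schedule are illustrative assumptions; a real system would tune both per service.

```python
import time

def call_with_retries(fn, attempts: int = 3, backoff: float = 0.5):
    """Bounded retries with exponential backoff; fail fast after the last attempt."""
    for i in range(attempts):
        try:
            return fn()
        except (TimeoutError, ConnectionError):
            if i == attempts - 1:
                raise                      # give up: let callers degrade gracefully
            time.sleep(backoff * 2 ** i)   # e.g. 0.5s, 1s, 2s, ...
```

Bounding the attempts matters as much as retrying: unbounded retries against a hung service are exactly how request queues surpass installed capacity.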

Sunday, March 20, 2016

Big Data for Governance - Implications for Policy, Practice and Research

A recent IDC forecast shows that the Big Data technology and services market will grow at a 26.4% compound annual growth rate to $41.5 billion through 2018, or about six times the growth rate of the overall information technology market. Additionally, by 2020 IDC believes that line of business buyers will help drive analytics beyond its historical sweet spot of relational (performance management) to the double-digit growth rates of real-time intelligence and exploration/discovery of the unstructured worlds.

This predicted growth is expected to have a significant impact on all organizations, be they small, medium or large, including exchanges, banks, brokers, insurers, data vendors, and technology and services suppliers. It also extends beyond the organization, with the increasing focus on rules and regulations designed to protect a firm's employees, customers and shareholders as well as the economic wellbeing of the state in which the organization resides. This pervasive use and commercialization of big data analytical technologies is likely to have far-reaching implications for meeting regulatory obligations and governance-related activities.

Certain disruptive technologies such as complex event processing (CEP) engines, machine learning, and predictive analytics using emerging big-data technologies such as Hadoop, in-memory, or NoSQL illustrate a trend in how firms are approaching technology selection to meet regulatory compliance requirements. A distinguishing factor between big data analytics and regular analytics is the performative nature of Big Data and how it goes beyond merely representing the world but actively shapes it.


Analytics and Performativity


Regulators are staying on top of big data tools and technologies, leveraging them to search through the vast amount of organizational data, both structured and unstructured, to prove a negative. This forces organizations to use the latest and most effective forms of analytics, and thus avoid regulatory sanctions and stay compliant. Analytical outputs may provide a basis for strategic decision making by regulators, who may refine and adapt regulatory obligations accordingly and then require firms to use related forms of analytics to test for compliance. Compliance analytics are not simply reporting on practices but also shaping them: through accelerated decision making, they change strategic planning from a long-term, top-down exercise to a bottom-up, reflexive one. Due to 'automation bias' or the underlying privileged nature of the visualization algorithms, compliance analytics may not be neutral in the data and information they provide and the responses they elicit.

Technologies which implement surveillance and monitoring capabilities may also create self-disciplined behaviours through a pervasive suspicion that individuals are being currently observed or may have to account for their actions in the future. The complexity and heterogeneity of underlying data and related analytics provides a further layer of technical complexity to banking matters and so adds further opacity to understanding controls, behaviours and misdeeds. 

 Design decisions are embedded within technologies shaped by underlying analytics and further underpinned by data. Thus, changes to part of the systems may cause a cascading effect on the outcome. Data accuracy may also act to unduly influence outcomes. This underscores the need to understand big data analytics at the level of micro practice and from the bottom up. 


Information Control and Privacy


The collection and storage of Big Data raises concerns over privacy. In some cases, the uses of Big Data can run afoul of existing privacy laws. In all cases, organizations risk backlash from customers and others who object to how their personal data is collected and used. This can present a challenge for organizations seeking to tap into Big Data's extraordinary potential, especially in industries with rigorous privacy laws such as financial services and healthcare. Some wonder if these laws, which were not developed with Big Data in mind, sufficiently address both privacy concerns and the need to access large quantities of data to reach the full potential of the new technologies.

The challenges to privacy arise because technologies collect so much data and analyze it so efficiently that it is possible to learn far more than most people had predicted or can predict. These challenges are compounded by the limitations of traditional technologies used to protect privacy. The degree of awareness and control can determine information privacy concerns; however, the degree may depend on personal privacy risk tolerance. In order to be perceived as ethical, an organization must ensure that individuals are aware that their data is being collected and have control over how their data is used. As data privacy regulations impose increasing levels of administration and sanctions, we expect policy makers at the global level to be placed under increased pressure to mitigate regulatory conflicts and multijurisdictional tensions between data privacy and financial services regulations.

Technologies such as social media or cloud computing facilitate data sharing across borders, yet legislative frameworks are moving in the opposite direction towards greater controls designed to prevent movement of data under the banner of protecting privacy. This creates a tension which could be somewhat mediated through policy makers’ deeper understanding of data and analytics at a more micro level and thereby appreciate how technical architectures and analytics are entangled with laws and regulations. 

The imminent introduction of data protection laws will further require organizations to account for how they manage information, requiring much more responsibility from data controllers. Firms are likely to be required to understand the privacy impact of new projects and correspondingly assess and document perceived levels of intrusiveness. 


Implementing an Information Governance Strategy


The believability of analytical results when there is limited visibility into the trustworthiness of the data sources is one of the foremost concerns an end user will have. A common challenge associated with the adoption of any new technology is walking the fine line between speculative application development, assessing pilot projects as successful, and transitioning those successful pilots into the mainstream. The enormous speed and amount of data processed with Big Data technologies can cause the slightest discrepancy between expectation and performance to exacerbate quality issues. This may be further compounded by metadata complications when conceiving definitions for unstructured and semi-structured data.

This necessitates that organizations work towards developing an enterprise-wide information governance strategy with related policies. The governance strategy should encompass the continued development and maturation of processes and tools for data quality assurance, data standardization, and data cleansing. The management of metadata and its preservation, so that it can be evidenced to regulators and courts, should also be considered when formulating strategies and tactics. The policies should be high-level enough to be relevant across the organization while allowing each function to interpret them according to its own circumstances.
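Data standardization and quality assurance of the kind described can be expressed as small, auditable rules. The field names, formats and checks below are purely illustrative assumptions, not a prescribed schema.

```python
def standardize_record(raw: dict) -> dict:
    """Illustrative cleansing rules: trim identifiers, normalise codes, fix types."""
    return {
        "customer_id": str(raw["customer_id"]).strip(),
        "country": str(raw.get("country", "")).strip().upper() or "UNKNOWN",
        "amount": round(float(raw["amount"]), 2),
    }

def quality_issues(record: dict) -> list:
    """Simple rule-based quality checks producing auditable findings."""
    issues = []
    if not record["customer_id"]:
        issues.append("missing customer_id")
    if record["amount"] < 0:
        issues.append("negative amount")
    return issues

raw = {"customer_id": " 42 ", "country": "in", "amount": "100.456"}
clean = standardize_record(raw)
assert clean == {"customer_id": "42", "country": "IN", "amount": 100.46}
assert quality_issues(clean) == []
```

Keeping the rules explicit like this is what makes the cleansing process evidenceable to regulators: each finding traces back to a named, versionable check.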

Outside of regulations expressly for Big Data, lifecycle management concerns for Big Data are fairly similar to those for conventional data. One of the biggest differences, of course, is in providing the resources needed for data storage, considering the rate at which the data grows. Different departments will need access to data for varying lengths of time, which factors into how long data is kept. Lifecycle principles are inherently related to data quality issues as well, since such data is only truly accurate once it has been cleaned and tested for quality. As with conventional data, lifecycle management for Big Data is also industry-specific and must adhere to the relevant external regulations.

Security issues must be part of an Information Governance strategy, which will require current awareness of regulatory and legal data security obligations so that a data security approach can be developed based on repeatable and defensible best practices.

Sunday, January 3, 2016

Enterprise Architecture - Guiding Principles

Enterprise Architecture (EA) artifacts must be developed with a clear understanding of how the EA will be used and who will use it. The EA may be used as a tool for evaluating design alternatives and selecting optimal solutions, as a guide providing insights into how practices will be streamlined or improved through automation, or as a plan for needed investments and an understanding of what cost savings will be achieved through consolidation. Throughout, the people involved in the development and maintenance of an EA framework should consistently follow certain guiding principles, so that the EA contributes to the vision and mission of the enterprise. That makes the guiding principles among the most important elements, and usually the first step, in developing an EA.


Enterprise architecture principles serve as a framework for decision making by providing guidance about the preferred outcomes of a decision in a given context. They act as a mechanism for harmonizing decision making across organization functions and departments, in addition to guiding the selection and evolution of information systems to be as consistent and cost-effective as possible. Alignment with enterprise architecture principles should be a goal for any initiative and will result in fewer obstacles, surprises and course corrections later in the project.


The usefulness of principles is in their general orientation and perspective; they do not prescribe specific actions. A given principle applies in some contexts but not all contexts. Different principles may conflict with each other, such as the principle of accessibility and the principle of security. Therefore, applying principles in the development of EA requires deliberation and often tradeoffs. The selection of principles to apply to a given EA is based on a combination of the general environment of the enterprise and the specifics of the goals and purpose of the EA. The application of appropriate principles facilitates grounding, balance, and positioning of an EA. Deviating from the principles may result in unnecessary and avoidable long-term costs and risks.


Typically there will be a set of overarching general principles and specific principles with respect to Business Architecture, Application & Systems, Data & Information, Security, etc. The following are some of the generic guiding principles that could be applicable to all enterprises.


Maximize Value

Architectures are designed to provide long-term benefits to the enterprise. Decisions must balance multiple criteria based on business needs. Every strategic decision must be assessed from a cost, risk and benefit perspective. Maximizing the benefit to the enterprise requires that information system decisions adhere to enterprise-wide drivers and priorities. Achieving maximum enterprise-wide benefits will require changes in the way information systems are planned and managed. Technology alone will not bring about change. To maximize utility, some functions or departments may have to concede their preferences for the benefit of the entire enterprise.


Business Continuity

As system operations become more pervasive, the enterprise becomes more dependent on them. This calls for ensuring reliability and scalability to suit the current and perceived future use of such systems throughout their design and use. Business premises throughout the enterprise must be provided with the capability to continue their business functions regardless of external events. Hardware failure, natural disasters, and data corruption should not be allowed to disrupt or stop enterprise activities. The enterprise business functions must be capable of operating on alternative information delivery mechanisms. Applications and systems must be assessed for criticality and impact on the enterprise's mission in order to determine the level of continuity that is required, as well as the need for an appropriate recovery plan.


Applications & Systems Architecture

Applications and systems should be scalable to support use by organizations of different sizes and to handle decline or growth in business levels. While unexpected surges or declines in volume must be handled, support for horizontal scaling is also essential. Enterprise applications should be easy to support, maintain, and modify; this lowers the cost of support and improves the user experience. Applications and systems should have the following characteristics: flexibility, extensibility, availability, interoperability, maintainability, manageability and scalability.


Legal and Regulatory Compliance

Information system management processes must comply with all relevant contracts, laws, regulations and policies. Enterprise policy is to abide by laws, policies, and regulations; this will not preclude business process improvements that lead to changes in policies and regulations. The enterprise must be mindful to comply with laws, regulations, and external policies regarding the collection, retention, and management of data. Efficiency, need, and common sense are not the only drivers: changes in the law and changes in regulations may drive changes in our processes or applications. Staff need to be educated about the importance of regulatory compliance, given access to the rules, and made aware of their responsibility to maintain compliance. Where existing information systems are non-compliant, they must be strategically brought into compliance.


Leverage investments

All systems shall leverage existing and planned components, enterprise software, management systems, infrastructure, and standards. It is impossible to accurately predict everything upfront. A try-before-you-buy approach validates investment plans, designs and technologies, and prototypes enable users to provide early feedback about the design of the solution. If an enterprise capability is incomplete or deficient, efforts should be made to address the deficiency rather than duplicating it or investing further in building new capabilities. This will allow the enterprise to achieve maximum utility from existing investments.


Risk Based Approach to Security

Following a risk-based approach provides the enterprise with an opportunity to: identify threats to projects, initiatives, data and the ongoing operation of information systems; effectively allocate and use resources to manage those risks; avoid unwarranted speculation, misinterpretation and inappropriate use; and improve stakeholder confidence and trust. Information systems, data and technologies must be protected from unauthorized access and manipulation. Enterprise information must be safeguarded against inadvertent or unauthorized alteration, sabotage, disaster or disclosure. The cost and level of safeguards and security controls must be appropriate and proportional to the value of the information assets and to the severity, probability and extent of harm.


Continuous Improvement

The rate of change and improvement in the worldwide information technology market has led to extremely high expectations regarding quality, availability and accessibility. As a result, ICT must deliver projects and service-level agreements (SLAs) on progressively shorter deadlines, and information systems with increasingly higher quality, in a cost-controlled manner. This demand requires an operating model that continuously reviews and improves upon current practices and processes. Routine tasks that can be automated should be, but only where the benefit justifies the cost; the complexity of the process, the potential time savings and the potential for error reduction should be factored into the benefit. Processes and tasks must be analyzed and understood to determine the opportunity for improvement and automation. Service outages, errors and problems need to be analyzed to understand and improve upon deficiencies in existing processes and practices. Manual integration, where data is copied from one information system to another by hand, should give way to automated processes that are repeatable, timely and less prone to error.


Responsive Change Management

Changes to the enterprise information environment must be implemented in a timely manner. If people are expected to work within the enterprise information environment, that environment must be responsive to their needs. Processes may need to be developed to manage priorities and expectations. This principle will, at times, conflict with other principles. When this occurs, the business need must be considered, but initiatives must also be balanced with other enterprise architecture principles. Without this balanced perspective, short-term considerations and supposedly convenient exceptions and inconsistencies will rapidly undermine the management of information systems.


Technology Independence

Business architecture describes the business model independent of its supporting technology and provides the foundation for the analysis of opportunities for automation. Eliminate technology constraints when defining business architecture and ensure automated processes are described at the business process level for analysis and design. Enterprise functions and IT organizations must have a common vision of both a unit’s business functions and the role of technology in them. They have joint responsibility for defining the IT needs and ensuring that the solutions delivered by the development teams meet expectations and provide the projected benefits. Independence of applications from the supporting technology allows applications to be developed, upgraded and operated under the best cost-to-benefit ratio. Otherwise technology, which is subject to continual obsolescence and vendor dependence, becomes the driver rather than the user requirements themselves.


Data is a Shared Resource

Timely access to accurate data is essential to improving the quality and efficiency of enterprise decision making. It is less costly to maintain timely, accurate data and share it from a single application than it is to maintain duplicate data in multiple applications with multiple rules and disparate management practices. The speed of data collection, creation, transfer and assimilation is driven by the ability of the enterprise to efficiently share these islands of data across the organization. A shared data environment will result in improved decision making and support activities, as we will rely on fewer sources (ultimately one) of accurate and timely managed data. Data sharing will require a significant cultural change. This principle of data sharing will need to be balanced with the principle of data security. Under no circumstance will the data sharing principle cause confidential data to be compromised.

The above is not an exhaustive list. The set of principles depends on the enterprise's vision and mission, and as the EA is aligned to that vision and mission, the principles should also be formulated with alignment in mind. While the above principles are generic and may be used by all enterprises, it is important to state each principle in a structured manner. A principle shall be supported with a rationale, so that users can understand why the principle exists and to what extent it can be traded off when a conflict arises.

Saturday, September 26, 2015

Teachability - a Significant Soft Skill for Leaders

"If You Want to Learn, Be Teachable" -- By John C. Maxwell

There is an old saying that “you can't teach an old dog new tricks,” but the concept called “teachability” remains a key component for ensuring that professionals of all walks are successful in their pursuits. This is all the more important in IT, because change here happens at a faster pace, and those who are teachable get better in their careers on the way to becoming leaders.


Today's educational methods and curriculum are designed with a basic assumption that students are teachable. But when teachers find students who lack this skill, they become frustrated, and the gap between teaching and teachability widens. The teachability factor should form part of the early school curriculum, so that it bears fruit as students pass through the further stages of education.


Today's kids are smarter; they are born among smart gadgets and devices, and they handle these devices far better than their grandparents. But this smartness does not mean that they are teachable. Being teachable is closely related to adaptability and being curious. To be teachable, one has to be quick to learn and observe; to take direction, advice and correction when one makes a mistake; and to learn from all of those. Both parents and teachers should be trained to improve these teachability traits in students right from childhood.

The character of teachability has two aspects to it: one is being a learner, and the other is to pass it on, sharing insights and what we have learned with others. It is first being a learner, absorbing and applying what one has come through, then replicating that in others. To be a person who can teach, we have to be a person who is teachable. Being teachable is a choice. We choose whether we are open or closed to new ideas, new experiences, others’ ideas, people’s feedback, and willingness to change. The key to teachability is not just that we try ideas on for size, but that we actually learn from others and change our point of view, process, and future decision making based on what we have learned.


We all know that "change is the only constant," and change is happening everywhere. In IT, change happens at a faster pace: newer tools and technologies emerge quickly, requiring IT professionals to learn continuously. One of the important characteristics required to adapt to change is being teachable. Today’s competitive advantage goes to those who can learn and adapt faster, which are the important traits of being teachable.


The work and decision-making environment differs across workplaces. One should be willing to adapt and learn in these changing environments and circumstances; simply put, be teachable.

Here are the important traits of Teachability:


Conducive to Learning - Approach each day as an opportunity for a new learning experience. Keep an open mind and listen to people; there is something to learn from every person you meet. Teachable persons remain alert for new ideas and always expect something to learn in every problem they face. They know that success has less to do with possessing natural talent and more to do with choosing to learn.

Be a Beginner Forever - When people are actually beginners, they have the mind-set to be trained and to learn. But as we all know, once they get better in the subject and reap more and more successes, they tend to get carried away and become closed-minded. To be teachable, one has to stay in the beginner's mind-set forever. The more success you have, the harder it is to maintain the beginner's mind-set, because you are much more likely to think you know the answer and have less to learn. Believing in and practicing the following will help one keep the beginner's mind-set: everyone has something to teach me; every day I have something to learn; and every time I learn something, I benefit.

Reflect and Change - Becoming and remaining teachable requires people to honestly and openly reflect on and evaluate themselves continuously. Any time you face a challenge, loss, or problem, one of the first things you need to ask yourself is, “Am I the cause?” If the answer is yes, then you need to be ready to make changes. Recognizing your own part in your failings, no matter how painful, and working hard to correct your mistakes, leads to the ability to change, grow, and move forward in life.

Inter-Personal Skill - Inter-personal skill helps nurture the art of learning from the people around you. Be open-minded and speak freely with those around you, openly yet honestly sharing the facts of not only work but also personal life. This strengthens relationships and keeps you approachable, and thus helps you get honest feedback; it also makes others courageous and honest enough to speak freely. Be willing to accept such feedback and criticism.

Learn Unto Death - The secret to any person’s success can be found in his or her daily agenda. People grow and improve, not by huge leaps and bounds, but by small, incremental changes. Teachable people try to leverage this truth by learning something new every day. A single day is enough to make us a little larger or a little smaller. Several single days strung together will make us a lot larger or a lot smaller. If we do that every day, day upon day, there is great power for change.

Non-Defensive - After you receive any form of constructive criticism, think about it and decide how you will act differently in the future. Don't get defensive when called out. Instead, learn from it and improve, so you don't make the same mistake again. Many of these lessons will come from the school of hard knocks. A teachable person is non-defensive. When they are wrong they quickly admit their wrongdoing and seek to learn how to be better next time. A teachable person allows others to speak truths learned from experience into their lives. A teachable person does not make unilateral decisions but seeks wisdom and knowledge from multiple people.

As you would have observed, teachability requires certain soft skills, which are not easy to acquire. Though this is not a born-with skill, one can put in effort to become teachable. Most organizations today consider soft skills more valuable than hard skills, because hard skills can be acquired on the job, but soft skills are not as easy to acquire. Though many recruiters may not name Teachability as a soft skill, they are certainly looking for the traits that form part of it; for most recruiters, the traits mentioned above figure in their evaluation checklists.

John C Maxwell suggests the following to pursue Teachability:

Learn to Listen - As the old saying goes, “There’s a reason you have one mouth and two ears.” Listen to others and remain humble, and you will learn things that can help you expand your talent.

Understand the Learning Process - Act, Reflect, Improve and Repeat

Look for and Plan Teachable Moments - By reading books, visiting places that inspire you, attending events that prompt you to pursue change, and spending time with people who stretch you and expose you to new experiences.

Make your teachable moments count - Pay attention to:
  • Points you need to think about
  • Changes you need to make
  • Lessons you need to apply
  • Information that you need to share
Ask yourself, “Am I really teachable?” - Ask yourself the following questions:

  • Am I open to other people’s ideas?
  • Do I listen more than I talk?
  • Am I open to changing my opinion based on new information?
  • Do I readily admit when I am wrong?
  • Do I observe before acting on a situation?
  • Do I ask questions?
  • Am I willing to ask a question that will expose my ignorance?
  • Am I open to doing things in a way I haven’t done before?
  • Am I willing to ask for directions?
  • Do I act defensive when criticized, or do I listen openly for truth?
A "no" to one or more questions above would mean that you have something to work on.

Saturday, August 15, 2015

The Promise and Peril of IoT

The Internet of Things can be defined as below:
The Internet of Things (IoT) is the network of physical objects or "things" embedded with electronics, software, sensors and connectivity that enable them to achieve greater value and service by exchanging data with the manufacturer, operator and/or other connected devices.

As we can see today, many things that we use in our daily lives are becoming smarter: they have embedded sensors, electronics and algorithms, so that they collect data in real time and convert it into useful information. The most common smart things that we see now range from tracking devices, cars, refrigerators, security cameras and ovens to even dustbins. The healthcare industry is leading in adopting IoT devices, and we have devices implanted under the skin that, on the positive side, help address many health concerns.


The IoT ecosystem primarily has three parts: the device itself, with the necessary sensors to collect data; the network that the devices use to share the data with the back-end systems; and the back-end system, which, apart from applying various analytical and algorithmic processes to the collected data, also manages the devices (rolling out updates, patches, etc.). Certain devices may not be able to connect to the internet directly; in that case, they reach the back end through intermediate broker devices, like smartphones.
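The three-part flow described above (device, broker, back end) can be sketched in a few lines. The class names, payload fields and device IDs are illustrative assumptions, not any particular product's API:

```python
import json
import time

class Sensor:
    """Device: collects a reading and packages it as telemetry."""
    def __init__(self, device_id):
        self.device_id = device_id

    def read(self, value):
        return {"device": self.device_id, "value": value, "ts": time.time()}

class Broker:
    """Intermediate device (e.g. a smartphone) relaying to the back end."""
    def __init__(self, backend):
        self.backend = backend

    def relay(self, telemetry):
        self.backend.ingest(json.dumps(telemetry))  # serialize for the wire

class Backend:
    """Back end: stores telemetry; in reality it would also manage devices."""
    def __init__(self):
        self.store = []

    def ingest(self, payload):
        self.store.append(json.loads(payload))

backend = Backend()
broker = Broker(backend)
broker.relay(Sensor("thermostat-01").read(21.5))  # reading reaches the back end
```

In a real deployment the relay step would typically use a protocol such as MQTT or HTTPS; the sketch only shows the shape of the data path.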

IoT is here to Stay

More and more IoT devices are coming out and will soon be everywhere; experts predict that their number can grow to 50 billion by the year 2020. The IoT will undoubtedly be beneficial, but not without perils. The pervasive interconnectedness of IoT devices will also help businesses better understand customer behavior and adopt appropriate business and marketing strategies targeting specific customers. While businesses like healthcare service providers may make the most out of this IoT push, it poses many concerns, ranging from data security to the life safety of those who either directly or indirectly use such devices.

As the benefits seem to outweigh the drawbacks, it is very likely that IoT is here to stay and the concerns have to be addressed as it matures in the coming years. Let us examine the Promises that IoT era is about to bring in and also the Perils that come along.


The Promise

Healthcare

As mentioned earlier, healthcare providers are among the earliest adopters of the IoT. The wider deployment of electronic medical records (EMRs) and of telemedicine technology, which relies heavily on remote data collection, needs IoT to take it further, and this convergence is expected to fuel the growth of IoT. With IoT, patients can submit their vitals from home without having to personally visit their physician, and thus experience enhanced and timely care, which can often be life-saving. This also helps healthcare providers innovate further and come up with preventive care plans. Typical IoT devices that we see now are fitness trackers, smart watches and other wearables like smart shoes.

Automobile

Next to healthcare, automobile makers have shown great interest in leveraging the IoT, and thus cars are becoming smart, with capabilities like driverless operation, parking assist, switching on the A/C remotely, etc. IoT, if it has not already, will enrich the in-car experience of the driver and passengers. The applications include enhanced in-car infotainment, improved safety controls and improved remote maintenance. For example, car tyres are getting smarter, with the ability to report tyre pressure in real time, and this may extend further to automatically inflating or deflating the tyre on the go. The cars rolling out today already have some level of smartness built in, giving an enhanced safety and driving experience.

Manufacturing

The IoT brings revolutionary changes to society, economy and technology, in such a manner that no one can afford to ignore its benefits. Manufacturing companies are seriously working to leverage IoT to: gain enhanced visibility over the production process; link production to business processes; and build responsive monitoring processes that improve the efficiency and quality of products and services. Application of IoT in these areas will lead to significant benefits, like securing and monitoring the movement of goods within and outside the factory, improving the quality of products, and preventive maintenance and upkeep of plant and machinery. When implemented correctly in every stage of the manufacturing process, IoT will be a significant benefit to everyone from employees on the manufacturing floor to shippers and, finally, the customer.

Retail

The retail industry would not want to be left out of this race to adopt the IoT, as it has the biggest potential to leverage it for better business results. Being in direct contact with end consumers, retailers can use in-store sensors to track smartphones throughout the store and record path-to-purchase data that can later be used to optimize store layouts. The checkout process can be made easier with smart shopping bags: the moment an item is dropped into the bag, it is added to the order, making billing a lot easier. IoT is also likely to be very useful in fraud prevention, such as theft of inventory. Early adopters will be positioned to more quickly deliver IoT-enabled capabilities that can increase revenue, reduce costs and drive a differentiated brand experience. The IoT will be a disruptive force in retail operations.
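The smart-bag checkout idea above can be sketched as an event-driven running order. The catalog, SKUs and prices are invented purely for illustration:

```python
# Sketch of the "smart bag" checkout: dropping an item into the bag
# adds it to the running order. Catalog and prices are made-up examples.

CATALOG = {"milk-1l": 1.20, "bread": 0.90, "coffee-250g": 4.50}

class SmartBag:
    def __init__(self):
        self.order = {}

    def item_dropped(self, sku):
        """Fired by the bag's sensor when an item lands in the bag."""
        self.order[sku] = self.order.get(sku, 0) + 1

    def total(self):
        return sum(CATALOG[sku] * qty for sku, qty in self.order.items())

bag = SmartBag()
bag.item_dropped("milk-1l")
bag.item_dropped("bread")
bag.item_dropped("milk-1l")
print(round(bag.total(), 2))  # total for two milks and a bread
```

The point of the sketch is the event flow: billing happens as items enter the bag, so checkout reduces to reading the already-built order.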

Other Benefits

The energy sector is adopting IoT with smart meters and grids that gather real-time data for remote monitoring of resource consumption, malfunctions, etc. Needless to mention, IoT enables building smarter homes, with smart connected home appliances and thermostats giving users the ability to monitor and manage them remotely. IoT is also entering our homes in the form of the internet-connected lightbulb, thermostat, door lock, washing machine or oven you can control from inside or outside your house. IoT has the power to transform our lives by offering the sensing, connectivity and intelligence needed to improve our wellbeing.

Having seen some of the promises, several of which are already real, let us now examine the dangers that come along.

The Perils

With IoT devices, consumers are often exposed to new risks and concerns that this new generation of devices and gadgets brings in. The concerns include users' own safety and possible effects on the networks used, apart from data protection and legal issues.

Another concern for businesses is the amount of data produced by all these IoT devices. The enormous data produced by various sensors must be transmitted, demanding high-performance networks, and stored, calling for commensurate storage and related infrastructure. The volume of data managed by enterprises between 2015 and 2020 is expected to grow 50 times year-over-year. The concern is not just the volume, but also the quality and security of the data. Legal issues around data ownership, accountability and responsibility cannot be ruled out either.

Security & Privacy

IT professionals are no longer just protecting data, circuits and transmissions, but need to focus on the relationships between “things,” “services to things” and “things to people.” Safety must be ensured along with availability, confidentiality and integrity. IoT devices may expose vulnerabilities, offering hackers an easy way into networks and databases of personal data. While manufacturers are responsible for the security of their products, organizations and end users are equally responsible for securely deploying and monitoring them within their networks.

The ways and means of securing IoT are still unclear, as the industry is evolving, with thousands of start-ups coming up with cheap, basic connected devices designed without security and safety in mind. The concerns around security and privacy stem from three levels. The first is the device itself: a device containing sensors to gather data and perform certain actions should have a mechanism to securely identify and authenticate the host system, so that it responds only to authorized hosts. The second is the network used for sending and receiving data: most IoT devices use wireless protocols like Bluetooth to reach an intermediate device for further connectivity with the internet, and securing these networks is equally important to ensure data protection. The third is the back end, where the huge volume of gathered data is stored and turned into more meaningful information for further action.
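As a sketch of the first level, device-to-host authentication, a device could sign each message with a pre-shared key so the other side can verify the sender and detect tampering. The key handling shown here (a hard-coded secret) is deliberately naive and only illustrative:

```python
import hashlib
import hmac
import json

# Illustrative only: in practice the key would be provisioned per device
# and stored in secure hardware, never hard-coded.
SHARED_KEY = b"per-device-secret"

def sign(payload: dict) -> str:
    """HMAC-SHA256 over a canonical JSON form of the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(sign(payload), signature)

msg = {"device": "cam-07", "event": "motion"}
sig = sign(msg)
assert verify(msg, sig)                            # authentic message accepted
assert not verify({**msg, "event": "idle"}, sig)   # tampered message rejected
```

This covers only message authenticity; the second and third levels (network transport and back-end storage) need their own protections, such as TLS and encryption at rest.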

The Internet of Things can be a complex market with multiple nodes, and businesses should aim to simplify this. There is no better way to assure a customer of simplicity and security than communicating regularly. It might seem a rudimentary thing to do, but the true test of a successful business is ensuring that there is a process in place amidst all that clutter.

Other Concerns

Today's connected cars contain a multitude of computers collecting data, from driving habits to location data to media and entertainment use. With connectivity, data collected by the vehicle’s computers is sent to a manufacturer or third party, and data is received as well, in the form of command and control or as updates to programs and algorithms. In addition to privacy concerns, these technologies potentially allow hackers to remotely access a vehicle’s control systems and thus endanger human life.

Consumer behavior data is being used to the advantage of retailers. For example, your trousers might get horrified by your weight gain and in turn have the TV showing contextual ads about new fad diets, the fridge selling you low-fat yogurt, and so on.

By getting smarter, things get more expensive and have a shorter life span. For instance, your mattress may not need replacing every couple of years, but a smart mattress with a sensor inside may need maintenance and replacement sooner than that. For cheaper connected devices like the kettle, toaster, waist belt, light switches and door knobs, expect replacement of these components to become a new, regular expense.

The current generation of kids are born with smart devices in hand and are extremely addicted to digital gadgets; smartphone notifications keep them busy and away from in-person socialization, setting them up for complete digital burn-out.

Friday, June 19, 2015

Information Security - Reducing Complexity


Change is constant, and we are seeing that everything around us is evolving. Primarily, the evolution is happening in the following categories:

Threats:

There is a drastic change in the threat landscape between now and the 1980s or even the 1990s. Between 1980 and 2000, a good anti-virus and firewall solution was considered good enough for an organization. Now those alone are not enough: hackers are using sophisticated tools, technology and skills to attack organizations. The motive behind hacking has also evolved, and on that front we see that hacking, though illegal, is a commercially viable profession or business.

Compliance:

With the pace at which the threat landscape is evolving, governments have reason to be concerned, especially as they are increasingly leveraging technology to better serve citizens, thus giving room for increased security risk. To combat such challenges, governments have come up with regulatory compliance requirements, making things even more complex for the CSOs of enterprises.

Technology:

Technology is evolving at a much faster pace, and as we are experiencing, the things around us are getting smarter, with the ability to connect and communicate over the internet. On the other side, considerable progress has been achieved in Artificial Intelligence, Machine Learning, etc. These newer ‘smarter things’ add to the complexity, as CSOs have to handle the threats they bring to the surface.

Needless to mention that the hackers too make the best use of the technology evolution and thus improving their attack capabilities day by day.

Business Needs:

The driver of the adoption of these evolutions is business need. As businesses want to stay ahead of the competition, they leverage the evolving technologies to surge ahead. With a shorter time to market, all departments, including the security organization, should be capable of accepting and implementing such changes at a faster pace. Due to this time pressure, there is a tendency to look for easier and quicker ways to implement changes, ignoring best practices.


Consumerization

IT today has to simplify things for consumers within and outside the organization; this raises user expectations and leads to many changes, some of them unrealistic as well. This may include users bringing their own anything (BYOA). It will soon include Bring Your Own Identity, with chips implanted under the skin: employees who work at EpiCenter, the new high-tech office campus in Sweden, can wave their hands to open doors, thanks to an RFID chip implanted under the skin.

Connected world

Most enterprises are now connected with their business partners for exchanging business data. With this, the IT system perimeter extends, to some extent, to that of the partners as well. Rules and policies have had to be relaxed to support such connected systems. Now the things we use every day are transforming into connected things, adding further to the complexity.

Big data

The sheer volume of security-relevant data calls for big data tools to handle it. While this complexity did exist earlier, the attacks were not as sophisticated then. Today, with the level of sophistication on the attack surface, simplifying the handling of huge volumes of data is very much required.

Skillset

The threat landscape is widening and attacks are getting more sophisticated, which calls for ever better tools and technologies to prevent or counter them. This means there is continuous change in the methods, approaches, tools and technology used, making it difficult to maintain and manage the skills of the human resources.

Application Eco System

A midsized organization will have hundreds of applications, each needing different exceptions to the policies and rules. These applications may in turn use third-party components, so the chance of a vulnerability within these applications is very high. Given that these applications constantly change and evolve, there is a possibility that code or components left behind might expose a vulnerability.


How does this complexity impact security

Complexity impacts the security capability in many ways and the following are some:

Accuracy in Detection

Complexity makes the detection of a compromise difficult. Handling and correlating large volumes of logs from different devices, and from different vendors at that, will always be a challenge, and this makes timely and accurate detection a remote possibility. A successful countermeasure requires accurate detection in the pre-infection or at least the infection stage; the later a compromise is detected, the more complex it is to counter.
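A toy sketch of such correlation: flag a host only when events from several independent sources cluster within a short time window. The source names, thresholds and event shape are assumptions for illustration, far simpler than a real SIEM:

```python
from collections import defaultdict

# Illustrative event stream from three hypothetical log sources.
events = [
    {"src": "firewall",  "host": "10.0.0.5", "ts": 100},
    {"src": "ids",       "host": "10.0.0.5", "ts": 130},
    {"src": "antivirus", "host": "10.0.0.5", "ts": 150},
    {"src": "firewall",  "host": "10.0.0.9", "ts": 400},
]

def correlate(events, window=60, min_sources=3):
    """Flag hosts seen by >= min_sources distinct sources within `window` seconds."""
    by_host = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_host[e["host"]].append(e)
    flagged = []
    for host, evs in by_host.items():
        for i, first in enumerate(evs):
            sources = {e["src"] for e in evs[i:] if e["ts"] - first["ts"] <= window}
            if len(sources) >= min_sources:
                flagged.append(host)
                break
    return flagged

print(correlate(events))  # only the host seen by all three sources is flagged
```

Requiring agreement across sources is one way to cut false positives from any single noisy device, at the cost of missing attacks visible to only one sensor.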

Resources

Each new security technology requires people to properly deploy, operate and maintain it, but it is difficult to add new heads to the security organization whenever a new tool or technology is considered. Similarly, legacy solutions put in place by former employees who are no longer with the organization are likely to remain untouched for fear of breaking something.

Vulnerabilities and Exposures

With the huge number of applications used by the enterprise, vulnerability assessment is a complex and huge exercise, unless it is integrated into the build and delivery process by mandating a security vulnerability assessment. With innumerable applications, components and operating systems connecting to the enterprise network, complete coverage is almost impossible. Needless to mention, with wearables and other smarter things connecting to the network, who knows what vulnerabilities exist in such devices that may in turn be exploited by hackers.

Methods for reducing complexity

Complexity is certainly bad, and reducing it will be beneficial both in terms of cost and otherwise. However, simplification by any means should not result in compromising the needed detection and protection abilities. A balanced approach is necessary, so that risk, cost and complexity are well balanced and beneficial to the organization. The following are some methods that may help reduce complexity:

  • Integrated processes as against isolated security processes. Every business process should have the related security processes integrated within it, so that every person in the organization contributes towards security by default. The security process framework shall be designed in such a manner that it evolves over time based on experience and feedback.
  • Practicing an Agile approach within the security organization, so that complexity is hidden within tools and appliances by automating it. An Agile approach also helps the security organization embrace changes faster, especially when implementing changes in response to a detected threat or compromise. One has to carefully adopt such practices into the security framework.
  • Outsourcing security operations to Managed Security Service Providers (MSSPs) is certainly an option for small and medium enterprises that takes some of the complexity away and thus benefits the organization. Needless to mention, outsourcing does not absolve the security organization of responsibility for any security incident or breach.
  • “Shrinking the rack” – consolidating technologies, whereby devices combine multiple technologies and capabilities, may make deployment and administration easier. At the same time, this carries the risk of having all eggs in one basket: when such a device or solution is hacked, everything is wide open to the hackers.
  • Mandating periodic code, component and process refactoring, whereby unneeded legacy code, components and processes are periodically reviewed and removed from the system. This will help keep applications maintainable and secure. Also implant security as a culture amongst all employees, so that they handle security indicators responsibly.

Saturday, May 23, 2015

Factors Affecting Software Resiliency

Digital transformation is happening everywhere, from small private firms to government organizations. On the personal front, connected things are coming in, whereby everything we have or use will be smart enough to connect and communicate with other things (systems). This in effect means there will be increased reliance on IT systems to accomplish various tasks, which calls for a high order of resilience on the part of such systems; its absence may lead to disastrous situations.

As we all know, the word resiliency means 'the ability to bounce back after some event'. In other words, it is the capability of withstanding a shock or impact without major deformation or rupture. In software terms, resilience is the ability to keep avoiding failures when facing change or operating in deviated circumstances.

To design a resilient system, one should first understand the various factors that work against the resiliency. Here are some such factors:


Design Flaws

Design and architecture are major factors that work for or against the resiliency requirement. While designing a system or solution, architects should have a good understanding of what could go wrong and provide exception-handling ability, so that all exceptions are appropriately handled and the system, instead of going down, recovers and continues to operate. Architects have many options today in terms of tools, technologies, standards, methodologies and frameworks that help build resiliency in. It is the ability to choose the right combination of these for the specific system that decides its resilience capability.
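One common exception-handling pattern of the kind described above is retrying a transient failure with exponential backoff, rather than letting the exception take the system down. A minimal sketch (attempt counts and delays are illustrative):

```python
import time

def with_retries(op, attempts=3, base_delay=0.01):
    """Run op(); on failure, back off exponentially and retry."""
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise                           # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** i))   # back off before the next try

# A hypothetical operation that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky))  # succeeds on the third attempt
```

The final re-raise matters: a resilient design recovers from transient faults but still surfaces persistent ones instead of hiding them.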


Software Complexity

The size and complexity of software systems is increasing, and thus the ways in which a system can fail also increase. It is fair to assume that the increase in failure possibilities does not bear a linear or additive relationship to system complexity. Typically, the complexity of a software system increases as it evolves in response to changing business needs. This is more so when the tools and technologies used to design and build the software become outdated, making the system difficult to maintain.

This complexity makes it increasingly difficult to incorporate resiliency routines that respond effectively to failures, both in the individual systems and in the composite system they form. The cost of achieving an equivalent level of resiliency in the face of this complexity should be added to that of the individual systems.

Interdependency and Interconnectivity

We are living in a connected world, and the systems of many of today's businesses depend on connectivity with partner entities to do business. This adds multiple points of failure over and above the network connectivity itself. System resiliency is increasingly dependent on the resiliency of systems in other organizations over which the entity has no control. This means that a failure or outage of a business partner's system can have a ripple effect. Systems therefore need to be aware of such failures or outages in connected systems, and the ability to recover from such events should be designed in.
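One well-known way to contain such ripple effects is the circuit-breaker pattern. The sketch below (a toy illustration; class and parameter names are hypothetical) stops calling a repeatedly failing partner system for a cool-down period and serves a fallback instead, so one partner's outage does not cascade.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after too many consecutive failures of a
    partner system, stop calling it for a cool-down period and use a
    fallback, containing the ripple effect of the outage."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def call(self, remote_operation, fallback):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                return fallback()      # circuit open: fail fast
            self.opened_at = None      # cool-down over: probe again
            self.failures = 0
        try:
            result = remote_operation()
            self.failures = 0          # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # open the circuit
            return fallback()
```

The design choice here is to fail fast while the circuit is open: repeatedly waiting on a dead partner would tie up threads and spread the outage into the caller's own system.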

Rapid Changes

Thanks to the evolving digital economy, business needs change frequently, requiring system changes in turn. Every change to an existing system will surely add a bit of complexity, as the architecture on which the system was originally designed would not have anticipated the changes coming through. Many times, considering time to market, such changes need to be implemented quicker than expected, leading software designers to adopt a quick-and-dirty approach to deliver the change, deferring a permanent solution to a later time. The irony is that there will never be a time when the 'permanent solution' is implemented.

Change is one of the key sources of complexity in software systems. However, evolving tools, technologies and methodologies come to the rescue, enabling architects to design systems and solutions that pave the way for embracing such changes and embed the resiliency factors in the design.

A frequently held criticism of Common Criteria testing is that, by the time the results are available, there is a good chance that the tested software has already been replaced. The danger here is that the new software may contain new vulnerabilities that did not exist in prior versions. Thus, determining that an obsolete piece of software is sufficiently resilient is not particularly indicative of the state of the newest version and, therefore, is not very useful.

Conclusion

Higher levels of resilience can be achieved by leveraging Machine Learning and Big Data tools and techniques. As the world moves towards more and more connected things, a high order of resilience is critical. With Machine Learning capability, systems and devices can be embedded with algorithms that let them learn from past events and from data collected from various other connected networks and systems, in addition to ambient data. Systems can be designed to predict the health of various underlying components, and thus their own health as well. Based on such predictions, components may choose alternate approaches, such as switching to alternate network protocols (Wi-Fi, Bluetooth, etc.) or connecting to a different component or system altogether.
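As a toy illustration of the last point (all names and the threshold are hypothetical), suppose a model trained on past failures produces a health score per communication channel; the device can then switch to the healthiest viable channel, or report that none is usable:

```python
def pick_channel(predicted_health, threshold=0.5):
    """Given per-channel health scores in [0, 1] -- e.g. the output of a
    model trained on past failure data -- pick the healthiest channel
    above the threshold, or None if every channel looks unhealthy."""
    viable = {ch: s for ch, s in predicted_health.items() if s >= threshold}
    return max(viable, key=viable.get) if viable else None
```

For instance, with scores `{"wifi": 0.2, "bluetooth": 0.9, "cellular": 0.6}` the device would fall back from Wi-Fi to Bluetooth; the prediction model itself is where the Machine Learning capability comes in.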

Sunday, February 1, 2015

Evolution of Wearables - What is in store?

Many of us are hearing more and more about fitness bands, and some are using them. Big players are now rolling out smart watches, which have disrupted basic fitness bands considerably in a very short span of time, as these smart watches include the basic fitness features. Wearables like glasses, jewellery, headgear, belts, armwear, wristwear, legwear, footwear, skin patches, exoskeletons and textiles are also increasingly becoming "smart". These emerging smart devices can be worn by human beings and will collect various data through embedded sensors, providing useful information that helps improve oneself, whether in physical fitness, health, etc.

As one can understand, wearables are not limited to gadgets that decorate your wrist, and the number of wearable devices in different segments is growing very fast. With rapid evolution in this space, there are devices worn around different areas of the body, and the following graphic shows the smart devices worn on different parts of the human body:



Who are at it?

Amongst many others, companies like Google, Samsung, Fitbit, Jawbone, GoQii, LG and Sony have been active in wearable devices, and the competition is heating up as big players like Intel and Apple bet big on this market.

Fitbit dominated the market for “basic bands,” according to Canalys’ market estimates, with more than 50 percent market share in the second half of the year. The Jawbone UP came second, cutting itself around a fifth of the pie, followed by Nike with its Fuelband.

The market forecast and the trend suggest that the wearable space could potentially disrupt many traditional devices. Thus many are watching this market, either to see how it could disrupt their product line or to see whether they have an opportunity in this space.

NeuroMetrix of Waltham is jumping into the market for wearable electronic devices. The company's new Quell device - an over-the-counter version of its Sensus device for management of chronic pain - is an actual medical device used to manage pain.

TomTom, the Dutch brand known for its standalone GPS navigators among other things, has brought its line of sports watches to India. TomTom launched four fitness wearables, including the TomTom Runner and Multi-Sport GPS watches, which deliver real-time stats such as time, distance, pace, speed and calories burnt to runners, swimmers and cyclists.

Xiaomi said in a press release that local sales of its Mi Band - a fitness tracking bracelet that runs for 30 days on a single charge - have surpassed 100,000 units since it was unveiled. The Beijing-based company forecast that more than 500,000 Mi Bands will be sold in Taiwan by the end of the year, giving it the biggest share of the country's wearable device market, currently led by Sony Corp. and Samsung Electronics Co.

Intel is firing on all cylinders to expand into the growing wearable technology arena such as smart watches and other Internet-enabled wearables. This investment in Vuzix Corporation is yet another effort by the chipmaker in this regard. Intel has unveiled Curie, a low-powered module no bigger than a button, as part of its vision to lead in the wearables field.

Rumors suggest that HTC will launch a smartwatch at the upcoming CES. The unveiling was initially planned for October, but was pushed back to CES 2015. Details of the device are unclear, as it could be a smartwatch or a fitness tracker.

In addition to all these devices, there will also be wearable technology focusing on health and fitness, prosthetics and smart clothing.

The Trend

Shipments of smart wearables are expected to grow from 9.7 million in 2013 to 135 million in 2018, according to CCS Insight's new global forecast. The forecast predicts that wrist-worn devices will account for 87% of wearables to be shipped in 2018 — comprising 68 million smartwatches and 50 million smart bands with no screen or with a minimal, one-line display.

The smartwatch will be the leading product category and take an increasingly large share of wearable shipments. We estimate smartwatch shipments will rise by a compound annual rate of 41% over the next five years. Smartwatches will account for 59% of total wearable device shipments this year, and that share will expand to just over 70% of shipments by 2019.

The dominant sector will remain healthcare, which merges medical, fitness and wellness. It has the largest number of big names - such as Apple, Accenture, Adidas, Fujitsu, Nike, Philips, Reebok, Samsung, SAP and Roche - behind the most promising new developments.

Google's Android could be critical for developing the smart devices ecosystem, though significant changes will be required before it is suitable for all kinds of wearable devices. Google has already released Android Wear, targeted for smart watches.

Samsung, Google and Apple, with their massive war chests, have come into this market. They are going to help elevate the category for consumers and help people understand the kinds of benefits they can get from these products. The next few years will see activity trackers with a bit more biosensing data, and smart watches that people will have to charge every night.

If Wearables 1.0 was about creating the basic technologies for wearable devices, Wearables 2.0 was and still is about crafting rich, robust business models based on these technologies. Wearables 3.0 will be all about perfecting, expanding and engaging customers at a level never experienced before. Big players in wearable technology and the Internet of Things - from healthcare companies to insurance corporations, from high street retailers to the music industry; Google, Apple, Samsung, Mercedes, Nike and Audi, just to name a few - are all set to give away their devices for free in exchange for data.

What could be the future?

Though it’s easy to be pessimistic, one cannot ignore the potential that this market has in store. In any event, while we wait for this category to evolve, it’s entertaining to watch the puzzle pieces slowly come together. Convergence is expected, in much the same way that the smartphone extended the basic functionalities of the feature phone and disrupted certain traditional devices like point and shoot camera.

The medical and wellness segment could be the one to embrace this category of wearable devices and make health more affordable and self-manageable for everyone. For instance, one could wear a virtual doctor while on a specific treatment. A better example: advances in wearable devices could lead to a scenario where a diabetes patient gets appropriate doses of insulin administered automatically, based on data collected by sensors worn around the body. This could be risky if the data so collected is inaccurate, and that is one of the major concerns expected to be addressed in the coming years.

There has to be a marriage of fitness devices and medical management devices to really impact patient health. The future of wearable technology in fitness and health isn’t about the fitness bands and health monitors – it’s about what can be done with the data they collect, which means that these devices have to be supplemented by smart applications that are powered by big data and analytics tools.

A very large percentage of the population already owns a smartphone, which has many capabilities, including those of basic wearable devices. As such, it will be critical that wearables provide a distinct value proposition, separate and different from the smartphone, although the smartphone will likely still act as the "hub" to collect information.

We’re already starting to see sensor-embedded running vests and smart socks. But we could soon see jackets with solar panels (to recharge your gadgets on the go), 3D printed dresses that everyone can afford, health-monitoring underwear, even clothes that react to light. If we had the ability to change the look of all of our clothes, just by fiddling with our phones, it would mean less spending on new gear and plenty of spare wardrobe space.

Wearables need to move beyond the gamification of fitness to focus on monitoring and improving our health. With extra sensors and smarter, more reliable algorithms, future devices should be able to warn us of high blood pressure and dehydration, fatigue and stress. Perhaps then, forewarned by data we understand, we'll find wearables more compelling.

In Wearable Tech 3.0, security is paramount. Six months from now we'll understand how poor Wearables 1.0 security was, if it existed at all! The big players in this market should finally draw up, define and release IoT and wearables industry security standards. Wearable Tech 3.0 is the beginning of a new era where enterprises provide real value to their customers - a key technology benefit in the age of the customer.

Thursday, November 6, 2014

Enterprise Architecture Practice - Capabilities

The Enterprise Architecture (EA) function now has an unprecedented chance to lead the way in identifying new business opportunities, thanks to innovations in web and mobile technologies and businesses realizing the advantages of such advancements. EA serves a strategic business purpose by enabling business capabilities to be implemented via IT architecture and related IT delivery processes.

Though Enterprise Architecture is not a very new practice, its maturity level is still not optimal in most enterprises. Seeing the benefits that the EA function can bring to the table, many enterprises are attempting to set up an EA practice within, but are in fact struggling to get it right. EA is neither pure science nor pure art; it is a combination of both. Successful EA practices have been found to demonstrate certain key capabilities. In the EA world, there is no such thing as 'one size fits all', as everything depends on the enterprise's business, its objectives, goals, strategies and priorities, which are never the same across enterprises.

While the objective of this blog is to discuss the key capabilities that the EA function should demonstrate, it is also worth highlighting what EA is not.

What EA is not:
  • EA is NOT a project
  • EA is NOT about review 
  • EA is NOT a one-time activity
  • EA is NOT for IT
  • EA is NOT a strategy
  • EA is NOT all about cost-reduction
  • EA is NOT a one-man show

A successful EA practice should consider practicing and demonstrating the following key capabilities:

Staying Relevant

As we all know, it is highly unlikely that an architectural solution that works well for one enterprise will work well for another in the same industry domain. This is because each enterprise has its own vision and mission to win over the competition, and constantly wishes to stand out from the crowd in certain key areas. Staying relevant helps the EA function align strategic and operational views of the business with the underlying technology and service delivery processes. For this reason, the EA practice should strive to understand the vision, mission and strategies of the enterprise and continue to stay aligned to them, so that the architectural solutions continue to stay relevant for the enterprise.

Technology & Architecture Vision

No doubt modern enterprises largely depend on technology, and in certain cases the business is in fact driven by technology. Whether technology drives the business or not, it is a key enabler of the business. So it becomes essential to have a technology vision that is aligned to the business vision. Needless to mention, having a vision alone is not enough; it shall be driven down into the operational processes and practices. Every architecture and governance process should derive from the technology vision as envisaged, so that the solutions continue to stay relevant and yield the intended results. The technology vision and strategy shall leverage both new technology innovations and existing capabilities that will enable the business to achieve the target state.

The goal of the architecture vision is to articulate how the proposed architecture will enable the business goals, respond to the strategic drivers, conform to the principles, and address the stakeholder concerns and objectives.

Transforming and automating operations

While leveraging the existing knowledge and resources is key in saving costs, it is important for the EA function to stay on top of the technology and business innovations and explore opportunities of leveraging the same so that the enterprise stays on course of achieving its target mission and vision. This is where the EA teams should consider leveraging Agile approaches, so that the target reference architecture also stays dynamic and relevant. The EA framework shall have an evolution cycle, so as to improve the framework itself and similarly the architecture solutions should also be continually evolved based on feedback and availability of enabling technologies and innovations.

It is needless to mention here that the EA function shall equally consider the 'Business As Usual' as any transformational initiative should not derail the enterprise from achieving its intended mission and vision.

Being the Change Leader

EA is all about bringing change for the good, i.e. EA programs are all about driving the enterprise from its current state to the target reference state - identifying and driving changes to various resources at various levels so that the target state is achieved. This is yet another key capability that comes down to the old adage of building "better, faster, cheaper" systems that provide the agility to change or expand capabilities in response to ever-changing business requirements. The EA function leads the planning for these new system and technology capabilities, ensuring the best solutions to business requirements by providing blueprints and implementation road maps to the design and delivery teams. It also serves the other organizational functions by ensuring compliance of these solutions at critical design and delivery milestones.

Mitigating risk

As the emphasis shifts from cleaning up the legacy of systems and technologies to better planning and governance of new IS and IT initiatives, we see a corresponding shift in the role of the EA practice. The focus shifts from driving out costs to reducing risks associated with new programs, while ensuring timely delivery of new capabilities. 

Every architectural initiative shall be subject to a risk review, and decisions shall be made based on the business value expected out of it. Changing business and regulatory conditions might also impact the solutions and could at times leave enterprises unable to realize the intended value. This is where the "fail fast" approach helps in making the right decisions. Periodic reviews of change or transformation projects should be conducted to ascertain whether the intended value remains achievable under the current conditions. Thus being able to manage and mitigate risks well is a key capability that the EA practice should demonstrate.

Overseeing investments

It is natural for enterprises to look for Return on Investment (RoI), as capital has a cost. The EA practice shall consider the cost of capital and the investment requirements for various change initiatives, and work with the other related functions to ensure that the benefits are quantified and the investments yield the desired returns. In cases where the benefits are not directly quantifiable, the EA team shall identify the indirect benefits derived from such investments and ascertain their monetary value in the best possible manner.

Governing the architecture

As said earlier, the EA function is not a project; it is a continuous function. The EA function shall put in place the necessary framework to monitor and manage architectural activities on a constant basis. Business architects in the EA function monitor the project portfolio, while IT architects govern technology solutions, leveraging reference architectures to build the future state in alignment with strategic road maps. Governance principles shall be applied to the various architecture activities with the objective of ensuring strategy alignment, risk management, measurement and monitoring, and optimal resource utilization.

Integrating people, processes, and technology
Considering the innovation in web, mobile and big data powered by social media, modern enterprises are looking to leverage these to derive maximum business value. In this direction, to stay competitive and relevant to their customers, the most successful organizations are rapidly moving towards a system-of-engagement architecture supported by digital collaboration platforms and social strategies devised by EA, where EA would create an effective social governance model and an overall enterprise strategy. This necessitates a pervasive social layer that spans many different systems of record and departments within an organization. The focus would also expand the social footprint by delivering a consistent digital experience and utilizing social content and online communities to increase collaboration with customers and other stakeholders.