How to Implement a Contingency Plan

A contingency plan prepares a business to face abnormal situations and to mitigate the impact of sudden disasters. It outlines the procedure to follow when one or more critical systems fail.

The implementation of a contingency plan depends upon the size of the organization and the resources available during the crisis. The plan should be designed, reviewed and accepted by management, then shared with the key members of the organization. Companies should periodically rehearse the steps outlined in the plan so that they are prepared when the need arises.

The business should have a contingency team that takes over operations and implements the plan for every type of risk identified. Equipment failure due to natural disasters or sabotage may be covered by insurance. The personnel implementing the contingency plan should know the contact details of the people or service providers to reach during an emergency, so they can get assistance in fixing the issue and bringing business operations back to normal.

Communication and notification are an important part of implementing a contingency plan. If a primary business location is affected by fire or flood, the plan might be to move employees and equipment to another location. Shifting operations to a new site requires a good communication plan. If the problem arises during working hours, the evacuation procedure should be followed and emergency help lines used to secure help. The persons responsible for implementing the contingency plan should be able to contact all employees through a previously agreed mode (telephone / e-mail / SMS) and instruct them to report for work at the new location until the old one can be made functional again. External suppliers, distributors and customers should also be notified of the change in location and given the contact details of whom to reach to resume operations.
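The notification step above can be sketched in code. This is a minimal, hypothetical illustration (the employee records, message, and `send` helper are assumptions, not from the article): each employee record stores the previously agreed contact mode, and the plan fans one message out to everyone.

```python
# Hypothetical sketch of an emergency notification fan-out. A real plan
# would call an SMS gateway or mail server inside send(); here it only
# formats the message so the fan-out logic is visible.

EMPLOYEES = [
    {"name": "A. Kumar", "mode": "sms", "address": "+1-555-0101"},
    {"name": "B. Singh", "mode": "email", "address": "b.singh@example.com"},
]

MESSAGE = "Report to the backup site at 9 AM until the main office reopens."


def send(mode: str, address: str, message: str) -> str:
    # Placeholder transport: real code would dispatch on mode.
    return f"[{mode}] to {address}: {message}"


def notify_all(employees, message):
    """Fan the message out over each employee's agreed contact mode."""
    return [send(e["mode"], e["address"], message) for e in employees]


for line in notify_all(EMPLOYEES, MESSAGE):
    print(line)
```

The key design point is that the agreed contact mode is recorded ahead of time, so the implementer never has to improvise during the emergency.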

Contingency planning is also important while executing a project. If a key team member is rendered out of action, another team member should be capable of stepping in to perform the important tasks. If the project follows good knowledge-sharing practices and has good documentation, it will be easier to induct new support staff (developers / testers) for assistance. It is important to communicate to the client that the absence of the regular person will not affect the project delivery schedule. If the project runs into issues likely to affect budget or deadlines, the person(s) implementing the contingency plan should know what needs to be communicated to the client, and how and when to send that information to show that measures have been taken to mitigate the risks and bring the situation under control. The implementer should perform follow-ups and send status updates to keep management and the client informed throughout a problem situation.

Early warning systems should be in place to notify or escalate issues to the relevant person(s) in charge. Analysis, assessment, co-ordination, prioritization and preparedness are the key elements of implementing a plan. Contingency plans should be periodically updated, and the lessons learnt from every incident should be incorporated into the plan.
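One way to make the escalation paths concrete is a simple severity table. This sketch is illustrative only (the severity levels and roles are assumptions, not taken from the article): each severity maps to the people who must be notified, and anything unrecognized defaults to the widest audience.

```python
# Illustrative severity-based escalation table for an early warning
# system. Roles and levels are hypothetical examples.

ESCALATION = {
    "low":    ["team lead"],
    "medium": ["team lead", "project manager"],
    "high":   ["team lead", "project manager", "management", "client"],
}


def escalate(severity: str) -> list:
    """Return who must be notified for a given issue severity.

    Unknown severities fall back to the widest notification list,
    erring on the side of over-communicating during an incident.
    """
    return ESCALATION.get(severity, ESCALATION["high"])


print(escalate("medium"))  # team lead and project manager
```

Keeping the table as data rather than scattered conditionals makes it easy to update the plan after each incident, as the paragraph above recommends.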

Security Jobs in the Gulf

The Deepwater Horizon rig explosion on April 20, 2010 affected several habitats in the Gulf of Mexico. Yet the impact of the oil spill cannot be calculated in its entirety, and time will reveal how the Gulf reacts to the accident. For the Gulf to recuperate, the coastal and marine ecosystems have to find balance once again, and the only way to do this is for the elements of each ecosystem to help with the repairs.

Coastal ecosystems, like mangrove forests, coral reefs, oyster beds and other environments, provide shelter to different types of organisms, like birds and crabs. The seagrass meadows in shallow waters provide food for manatees and turtles. Sargassum mats are floating seaweeds that provide nurseries and habitats for hundreds of species. All of these components possess inherent capabilities to cope with pollutants and waste, and to turn them into useful resources, like food and refuge.

Marine ecosystems comprise the diverse environments that exist in the three main water layers or zones. The sunlit epipelagic zone is the uppermost level, where plankton produces oxygen during photosynthesis and nourishes many fish, crustaceans and mammals. In the mesopelagic zone, which begins at 50 ft. below sea level, coral reefs provide shelter to different species of fish. These fish are essential to the rest of the food chain, which lives down to about 650 ft. below sea level. Cold seeps subsist on chemicals that ooze from the seafloor in the next layer, the bathypelagic zone. Caridean shrimp, mussels and octopuses live off the cold seeps, blending with other animals capable of surviving in extreme cold and pressure, with virtually no light.

Toxins from the oil spill have spread throughout the three layers, altering microfauna essential to the well-being of the Gulf. Additionally, heavier oil compounds can mix with other floating sediment and form tar balls that float to shore or sink to the bottom of the ocean. Although there have been several attempts to clean the oil and thereby reduce the disturbances in the Gulf, many people wonder whether these efforts themselves pose a threat to marine life.

Perhaps the best policy in the Gulf would be to develop safety regulations for all companies to abide by, as well as to create security jobs to enforce those regulations. Coastal security guards could monitor the different rigs and wells not only physically but also remotely with the use of technology. Professionals hired for these security jobs could work in conjunction with the Coast Guard and develop a safety network with a broader range of resources and experience. In a sense, we could complement the ecosystems of the Gulf, creating better living conditions not only for humans but for all sea life.

Focusing on Cloud Portability and Interoperability

Cloud computing has helped us understand both the opportunity, and the need, to decouple physical IT infrastructure from the requirements of business. In theory, cloud computing greatly enhances an organization’s ability not only to decommission inefficient data center resources but, even more importantly, to ease the move toward integration and service-orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), provide very good definitions and a solid reference architecture for understanding, at a high level, a vision of cloud computing.

However, these definitions, while good for addressing the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor the potential of enabling data and systems resources to meet the need for interoperability of data in a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards. The Internet emerged as a collaboration of academic, government, and private industry development which bypassed much of the normal technology vendor desire to create a proprietary product or service. The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and other web hosting services, was really thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open Grid Forum, OASIS, DMTF, and other organizations have developed some levels of standardization for virtualization and portability, the reality is that applications, platforms, and infrastructure remain largely tightly coupled, restricting the ease most developers need to accelerate higher levels of integration and interconnection of data and applications.
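The coupling problem above can be illustrated with a short sketch. This is a hedged example, not any real cloud SDK: the `ObjectStore` interface, the `InMemoryStore` adapter, and `archive_report` are all hypothetical names. The point is that when application code depends only on a small, neutral interface, a provider change means writing a new adapter rather than rewriting the application.

```python
# Hypothetical sketch of decoupling application code from a vendor SDK
# via a small neutral interface. No real provider API is used here.

from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Vendor-neutral storage interface the application depends on."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in adapter; a real one would wrap a provider's SDK."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]


def archive_report(store: ObjectStore, report: bytes) -> None:
    # Application logic never touches a vendor SDK directly, so the
    # provider can be swapped by supplying a different adapter.
    store.put("reports/latest", report)


store = InMemoryStore()
archive_report(store, b"quarterly numbers")
print(store.get("reports/latest"))
```

This is exactly the kind of seam that interoperability standards aim to make unnecessary to hand-build; until then, the adapter pattern is one pragmatic defense against tight coupling.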

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

“… the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

This is very easy to say; the reality, however, particularly with PaaS and SaaS libraries and services, is that few fully interchangeable components exist, and any information sharing is a compromise in flexibility.

The Open Group, in its document “Cloud Computing Portability and Interoperability”, simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative is of course an IT world that is restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.

What Can We Do?

The first thing is that the cloud consumer world must take a stand and demand that vendors produce services and applications based on interoperability and data portability standards. No IT organization in the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity-of-operations procedures. No IT infrastructure, platform, or application should be considered that does not allow and embrace portability. This includes NIST’s guidance stating:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”

The bottom line for all CIOs, CTOs, and IT managers: accept the need for service-orientation within all existing or planned IT services and systems. Embrace service-oriented architectures and enterprise architecture, and avoid at all costs the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to keep forcing adoption of, and compliance with, standards by all vendors. Do not accept anything that does not fully support the need for data interoperability.