Architecture Guidelines
Last Updated: March 29, 2010

A solid, well-thought-out architecture is the key to a successful production rollout of CA solutions (or of any software solution, for that matter). Planning a suitable physical architecture involves understanding the business requirements and the client's current IT people/process/technology resources.

Every architecture is based upon a set of business requirements along with use cases and provable business value statements. An architect must first understand the business conditions before defining the physical architecture.

This section provides guidelines for understanding your current environment, along with links to other sections that provide an overview of the architecture and tips for designing an appropriate solution based on those insights. Here we assume that business requirement analysis, business value statement definition, and "to be" people/process/technology state definition have been completed, and we focus on physical architecture. However, we will make references to business conditions that can impact architectural choices. Also included is an overview of several configuration options (both standard and advanced). Additional architectural insights can be found in the following sections:

One of the most significant changes for some CA products in recent years was the introduction of the MDB.  Click here for MDB-specific planning tips and considerations.

Before you can design a new architecture, you must understand the current environment and the business needs driving the implementation.

Understanding the Current Environment

Before you can architect your deployment, you need to understand the environment in which the deployment is to take place. This includes the physical aspects of that environment (the network connections and machine capacity), the logistical aspects (where the machines are located and how they are connected), the operational requirements (security restrictions, naming conventions), and the business needs that are expected to be met. The "installation of Software Delivery" does not satisfy a business "need," nor is it a successful deployment in and of itself. The desire to manage software installations across multiple machines and multiple sites in a standardized manner is, more obviously, a "need" which can be met by the installation of Software Delivery. Although this may seem like a matter of semantics, understanding the need that is driving an implementation will help you determine which components to implement and what customizations need to be made. It will also provide you with insight into client success criteria and which metrics to monitor.

The following topics will help you take this crucial first step in architecture planning:

Review the Existing Architecture

You will need to identify the following aspects of the existing architecture:

Keep in mind that, in addition to examining the existing components, you also need to research the future environment. The addition of new machines, a change in network structure, operating system changes, and even the implementation of other software solutions should all influence your architecture design. For example, if many new workstations will be added, you may need to plan for an additional middle-tier manager or management point. If a server you are targeting for MDB installation will also be the target of several processor-intensive software installations, you may consider selecting a different server entirely.

When considering the possibilities, remember to take growth into account. There are few things worse than having to make unexpected hardware and software purchases during or shortly after an implementation.

Hardware and Software

In order to understand the physical architecture of your deployment environment, you need to identify:

You need to understand both how the CA solution deployment will impact the current software applications and how those applications may impact the CA solution. You also need to identify "mission critical" applications and machines: those machines, applications, or functions which must be minimally, if at all, impacted.

Knowing what is available is half the battle. Knowing the size requirements, as well as the capabilities of various hardware platforms, will help in determining placement of the required "management points". Depending on which components or options you are deploying, it may be better to have fewer but more powerful machines (e.g., database servers), or more, less powerful machines (e.g., software delivery). When it comes to aggregating "management points", certain components fit on the same machine better than others.

In planning your implementation you should allow for a lab environment in order to test your proposed configuration before it is deployed in your production environment. The lab equipment should mirror, as much as possible, the hardware and software specifications of that production environment. This will enable you to identify any potential problems while the impact is minimal (or nonexistent).

In general, know the number of boxes, where they are located, and the bandwidth available between the central and remote locations; this will become crucial in determining scalability.

New or Used?

While it is easier to architect a solution using all new hardware dedicated solely to the infrastructure, this can be an expensive option in a large-scale rollout. Where possible, you should use existing hardware to limit the costs associated with purchasing new equipment. Remember that sizing the hardware will be much more complicated when other software will be running on the same machines. In general, low-level functions can run on existing hardware, while it is better to have dedicated hardware for "enterprise-level" management functions. This is because, in normal usage, many more users will be connected, via GUIs, to higher-level management servers than to a single low-level "management point." Dedicated hardware makes it easier to guarantee response times, since fewer independent functions are executing concurrently.

Growth

Be aware that your environment is not static and, in most cases, will be evolving and growing. A large-scale implementation can take over a year to roll out, so you should understand what the environment will look like at least that far into the future, if not further. A tiered architecture makes change and growth easier to handle. In a tiered approach, the environment is split into smaller groupings. For example, we might think of the architecture as having "levels" of responsibility.

If a new office is added, we simply replicate our office-level architecture (sized appropriately) to the new office. The same is true for regional offices. Spectrum supports multiple SpectroServers reporting to one or more enterprise-level managers. NSM allows us to have multiple "tiers", or levels. CMS supports a two-tier or three-tier architecture with similar flexibility. ServiceDesk supports multiple top-tier managers as peers. Support for tiers by these (and other) CA products allows great flexibility in architecting a solution.
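The tiered approach described above can be sketched in a few lines. This is a hypothetical illustration only: the tier names ("enterprise", "region", "office"), manager names, and sizing figure are invented placeholders, not CA product terminology.

```python
# Hypothetical model of a tiered architecture: office-level management
# points report into regions, which report to an enterprise-level manager.

def office_template(name: str) -> dict:
    """An appropriately sized office-level management point (placeholder spec)."""
    return {"manager": f"{name}-mgr", "agents": [], "sized_for": 250}

def add_office(arch: dict, region: str, office: str) -> None:
    """Growth is handled by replicating the office template into a region."""
    regions = arch["enterprise"]["hq-manager"]["regions"]
    regions.setdefault(region, {})[office] = office_template(office)

architecture = {"enterprise": {"hq-manager": {"regions": {}}}}

# Adding a new office simply replicates the office-level template.
add_office(architecture, "emea", "london")
add_office(architecture, "emea", "paris")
```

The point of the sketch is that growth becomes a replication step rather than a redesign: a new office or region reuses the same template, sized for its local population.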

Keep in mind that changes in size/scope of the current environment may require an adjustment to the architecture. For example:

Network Considerations

Of all environmental issues, the network layout is probably the most important to know. Implementations rely on the underlying network infrastructure to function, and you need to ensure that your implementation makes the most efficient and effective use of this precious resource. The more you know of both the physical and logical layout of the network, the better. It is certainly helpful to understand the limitations of the present network topology, and any planned actions to be taken.

A good example of this is a site that segments its network into Production subnets and Management subnets. Should a manager be placed on the Production subnet or the Management subnet? Without environmental knowledge, it would be hard to even ask the question. With a "distributed" management system, the answer is probably "both"!

Physical Topology

You must know the physical topology of the network; without this knowledge, you cannot realistically place management functionality correctly. You must also know the logical view of the network, in terms of LAN segments, etc. At one time, with "plain" Ethernet hubs, the physical and logical views were the same. With the advent of switching and routing, they are not necessarily the same. Ensure you have both layouts before continuing with an architectural design.

Once you have topology diagrams, you need to know the type and number of servers, printers, workstations, etc., that sit on the network. An exact count is not required; a rough idea of numbers, purpose, and types will usually suffice. With this information you can start to picture what needs to be managed, and where the management components should reside. However, knowing that you have a user community and some servers somewhere out at the far reaches of your network doesn't help you unless you have the bandwidth to communicate with them effectively.
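Producing the rough counts-by-type mentioned above is straightforward once you have any machine-readable inventory. The sample rows below are invented for illustration; in practice the data would come from an asset export or discovery tool.

```python
# Hypothetical sketch: summarize a device inventory into the rough
# counts-by-type that placement planning calls for. Sample data is invented.
from collections import Counter

inventory = [  # (hostname, device_type, site)
    ("srv01", "server", "hq"),
    ("prn01", "printer", "hq"),
    ("wks01", "workstation", "branch"),
    ("wks02", "workstation", "branch"),
]

# Rough numbers per type are enough to start placing management components.
counts = Counter(device_type for _, device_type, _ in inventory)
print(dict(counts))  # e.g. {'server': 1, 'printer': 1, 'workstation': 2}
```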

Line Speed - Bandwidth and Latency

You will need to know the line speeds, as well as the type(s) of backbone in place and/or planned. Client access times for end-user solutions based on ServiceDesk or Clarity over high-latency links can be unacceptable and force either a network upgrade or the use of a local server. You should never rely solely on available bandwidth numbers; always test actual network latency. A ten-megabit connection with plenty of available capacity but a measured latency of 500 milliseconds will often be useless for infrastructure management. It can also elongate software delivery requests such that SLAs cannot be met. Given latency measurements, you can make informed decisions about suggesting a network upgrade. There may be situations where it would be better to purchase another management server than to upgrade the network or overload a WAN connection. This illustrates some of the types of "trade-off" decisions which must be made.
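The latency-over-bandwidth point above can be captured as a simple assessment rule. This is an illustrative sketch only: the function name, thresholds, and verdict strings are assumptions for the example, with the 500 ms figure taken from the scenario described in the text.

```python
# Illustrative sketch (not a CA tool): judge a WAN link's suitability for
# management traffic using measured latency, not just raw bandwidth.

def assess_link(bandwidth_mbps: float, rtt_samples_ms: list[float],
                max_rtt_ms: float = 500.0) -> str:
    """Return a rough verdict on a link from measured round-trip times."""
    avg_rtt = sum(rtt_samples_ms) / len(rtt_samples_ms)
    if avg_rtt >= max_rtt_ms:
        # Plenty of spare bandwidth cannot compensate for high latency.
        return "unsuitable: consider a local management server"
    if bandwidth_mbps < 1.0:
        return "marginal: test software delivery SLAs before rollout"
    return "suitable"

# A 10 Mbit link with ~500 ms measured latency fails despite its capacity.
print(assess_link(10.0, [480.0, 510.0, 520.0]))
```

The takeaway matches the text: collect real round-trip measurements before trusting a link, since the bandwidth figure alone says nothing about interactive response times.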

Again, the distributed nature of CA solutions allows us to create many different architectures to provide the required solution with the desired availability, performance, and administration/operations characteristics. You need the client requirements and network topology as input in order to choose the most appropriate architecture. You should use best-practice-based architectures whenever possible; using a proven architecture template dramatically reduces the risks of poor performance, security issues, and instability.

Load Now and in the Future

The average and peak loads on the network must be measured, and that information used to ensure that the architected solution set will not adversely impact the network. This topic brings us back to the topology: to determine whether a connection is busy, you must know the underlying technology. Ethernet and ATM have widely different throughput capabilities under stress.
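Once average and peak loads are measured, the "will it adversely impact the network" question reduces to a headroom check. The sketch below is illustrative: the function name, the 80% safety ceiling, and all sample figures are assumptions, not values from any CA sizing guide.

```python
# Rough headroom check, assuming you have measured peak utilization and an
# estimate of the management traffic the new solution will add.

def link_has_headroom(capacity_mbps: float, peak_util_pct: float,
                      added_mgmt_mbps: float, ceiling_pct: float = 80.0) -> bool:
    """True if measured peak load plus projected management traffic stays
    under a safety ceiling (80% of capacity by default)."""
    projected_mbps = capacity_mbps * peak_util_pct / 100.0 + added_mgmt_mbps
    return projected_mbps <= capacity_mbps * ceiling_pct / 100.0

# A 10 Mbit link already peaking at 70% cannot absorb 2 Mbit more.
print(link_has_headroom(10.0, 70.0, 2.0))  # False
```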

If a connection is under stress, are there plans to upgrade the connection or to add another? It may be better to wait before implementing on that segment, rather than implementing a "temporary" solution.

The more information, the better the decisions!

Operating Standards and Restrictions

No planning can be complete without understanding existing operating policies.  This includes:

Now that you have gathered information regarding the current architecture, you need to verify that you have accurately understood the business requirement that is driving the implementation.

Understand the Business Needs

Just as important as understanding the physical architecture and operating standards of your implementation environment is understanding the business needs behind the deployment. These include:

Key Drivers

The monitoring of hardware devices and of the agents running on those devices is one part of an implementation. We must also consider the health and management of the applications running on these devices. The selection of which agents to implement on which devices should be determined, primarily, by the nature of the applications running on them and their criticality to the enterprise. Therefore, you will want to identify, from the outset:

Key Players

Depending on what solution is to be installed and implemented, you will need to identify:

Some CA solutions span multiple organizations or client IT disciplines, and you may find that there are subtle yet critical differences in how different departments approach their IT requirements.

Deliverables and Metrics

Deliverables should always be business-related, and a measurement mechanism needs to be created for each deliverable so that the effectiveness of the solution can be measured. For example, in a Remote Control implementation, the business requirement may be to "decrease the time required to ascertain the cause of a user being unable to print by 40%". The business requirement is NOT "to put a Remote Control agent on every desktop". That may be the solution, but it is not the business requirement. Without a metric to use as a measure, it will be hard to pinpoint exactly how the new implementation has improved your operations and what areas still need to be fine-tuned.
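The Remote Control example above can be turned into a concrete metric. The sketch below is illustrative: the function name and the sample durations are invented, while the 40% target comes from the business requirement quoted in the text.

```python
# Measure the business deliverable, not the installation: percent reduction
# in mean time-to-diagnosis, checked against the stated 40% target.

def pct_reduction(before: list[float], after: list[float]) -> float:
    """Percent decrease in the mean of `after` relative to `before`."""
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    return (mean_before - mean_after) / mean_before * 100.0

baseline = [30.0, 45.0, 60.0]   # minutes to diagnose, before rollout (sample data)
current = [15.0, 25.0, 20.0]    # minutes to diagnose, after rollout (sample data)

reduction = pct_reduction(baseline, current)
print(f"{reduction:.1f}% reduction; 40% target met: {reduction >= 40.0}")
```

Collecting the "before" baseline is the step most often skipped; without it, there is no way to show that the deliverable was met.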

Timeline

Before the deployment begins, you need to verify the project timeline. Does the solution need to be in place by a certain time? Are there restrictions on the deployment schedule (e.g., certain times when production servers can be modified, or another project running in parallel that may impact your rollout)? You must use this information to ascertain whether the solution can realistically be rolled out on the desired schedule without compromising the implementation. Agreement on the implementation plan must be reached by all involved parties. A phased implementation, preceded by technical engineering and policy development in a lab environment, is certainly recommended if time allows. The availability and use of a test environment and a well-defined change control process are always required for mission-critical deployments.

Now that you have an understanding of what needs to be done and the environment in which it needs to be done, click here to take a look at the basic elements of typical (and some not so typical) NSM architectures.