The purpose of the following use cases is to get you thinking about your CA SiteMinder® architecture in terms of high availability and performance. The use cases begin with a simple deployment and progress into more complex scenarios. Each case is based on the idea of a logical "block" of CA SiteMinder® components and illustrates how an environment can contain multiple blocks to address the following architectural considerations:
Extrapolate the necessary infrastructure from these cases to:
The simplest CA SiteMinder® deployment requires one "block" of components. A block of components is a logical combination of dependent components that include:
You protect web-based resources by deploying at least one block.
The following diagram illustrates a simple deployment:
Each component has a specific role in resource protection.
Note: For more information about the primary purpose of each component, see CA SiteMinder® Components.
You can extend the functionality of a simple deployment through the use of optional CA SiteMinder® components. The decision to implement optional components is determined by the CA SiteMinder® features your enterprise requires. For example:
The following diagram illustrates the optional components and their required dependencies:
Each component has a specific role in resource protection.
Note: For more information about the primary purpose of each component, see CA SiteMinder® Components.
You can extend the functionality of a simple deployment so that your environment protects resources that do not reside on a web server. For example, if your environment hosts resources on an:
The following diagram illustrates optional Agents:
Each component has a specific role in resource protection.
Note: For more information about the primary purpose of each component, see CA SiteMinder® Components.
The following use cases show how you can implement multiple blocks of components to build redundancy and failover into the environment using the following methods:
You can implement multiple blocks of components to build redundancy and failover into the environment using CA SiteMinder® round robin load balancing. This use case builds on a simple deployment to explain how you can begin thinking about operational continuity. The following diagram illustrates:
Each component has a specific role in resource protection.
Note: For more information about the primary purpose of each component, see CA SiteMinder® Components. For more information about CA SiteMinder® redundancy and high availability, see Redundancy and High Availability.
You can implement multiple blocks of components to build redundancy and failover into the environment using hardware load balancing. This use case builds on a simple deployment to explain how you can begin thinking about operational continuity. The following diagram illustrates:
Each component has a specific role in resource protection.
Note: For more information about the primary purpose of each component, see CA SiteMinder® Components. For more information about CA SiteMinder® redundancy and high availability, see Redundancy and High Availability.
You can implement additional clusters to help performance levels remain high as you scale the environment to extend throughput. This use case builds on the operational continuity use cases to explain how you can begin thinking about your architecture in terms of scale.
The initial deployment section of the diagram illustrates:
Note: For more information about Agent and Policy Server redundancy and high availability, see Redundancy and High Availability.
Note: For more information about Policy Server and user store redundancy and high availability, see Redundancy and High Availability.
Note: For more information about Policy Server and policy store redundancy, see Redundancy and High Availability.
Each component has a specific role in resource protection.
Note: For more information about the primary purpose of each component, see CA SiteMinder® Components.
The Scaled for Capacity section of the diagram details another component block and illustrates:
Note: For more information about Policy Server cluster failover thresholds, see the Policy Server Administration Guide.
You configure redundancy and high availability between logical blocks of CA SiteMinder® components to maintain system availability and performance.
When you configure a CA SiteMinder® Agent, a Host configuration file (named SmHost.conf by default) is created on the host server. The Agent uses the connection information in this Host configuration file to create an initial connection with a Policy Server.
After the initial connection is established, the Agent obtains subsequent Policy Server connection information from the Host Configuration Object (HCO) on the Policy Server.
You can configure the HCO to include multiple Policy Servers and specify the method the Agent uses to distribute requests among multiple Policy Servers.
A CA SiteMinder® Agent can distribute requests among multiple Policy Servers in the following ways:
Alternatively, you can configure the HCO to include a single virtual IP address configured on a hardware load balancer to expose multiple Policy Servers. In this case, the load balancer is responsible for failover and load balancing, rather than the Agent software.
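To make these distribution options concrete, the following sketch models an HCO-style Policy Server list as plain data. The class names, fields, and the example port are illustrative assumptions only; they are not part of the CA SiteMinder® API or configuration syntax.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PolicyServerEndpoint:
    """One Policy Server entry, as an Agent might track it from the HCO list."""
    host: str
    port: int = 44443        # illustrative port; confirm against your deployment
    available: bool = True   # updated by the Agent through periodic polling

@dataclass
class HostConfig:
    """Simplified stand-in for a Host Configuration Object (HCO).

    With a hardware load balancer, this list would instead contain a single
    virtual IP address and the balancer would handle distribution and failover.
    """
    policy_servers: List[PolicyServerEndpoint] = field(default_factory=list)
    mode: str = "failover"   # "failover" (default) or "round_robin"
```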
Failover is the default HCO setting. In failover mode, a CA SiteMinder® Agent delivers all requests to the first Policy Server that the HCO lists and proceeds as follows:
Note: For more information about configuring an HCO with multiple Policy Servers, see the Policy Server Configuration Guide.
If an unresponsive Policy Server recovers, which the Agent determines through periodic polling, the Policy Server is automatically returned to its original place in the HCO list and begins receiving all Agent requests.
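The failover behavior can be summarized in a minimal sketch, assuming the illustrative HostConfig model shown earlier; try_send is a hypothetical transport call, and the real Agent logic is more involved.

```python
def send_with_failover(hco: HostConfig, request):
    """Deliver the request to the first listed Policy Server, failing over in HCO order."""
    for server in hco.policy_servers:
        if not server.available:
            continue                              # skip servers that polling has marked down
        response = try_send(server, request)      # hypothetical transport call
        if response is not None:
            return response
        server.available = False                  # unresponsive; periodic polling restores it
    raise RuntimeError("No Policy Server responded")
```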
The following diagram illustrates the Agent failover process:
Round robin load balancing is an optional HCO setting that distributes requests evenly over a set of Policy Servers, which:
Note: For more information about configuring an HCO for round robin load balancing, see the Policy Server Configuration Guide.
In round robin mode, an Agent distributes requests across all Policy Servers that the HCO lists. An Agent:
If a Policy Server does not respond, the Agent redirects the request to the next Policy Server that the HCO lists. If the unresponsive Policy Server recovers, which the Agent determines through periodic polling, the Policy Server is automatically restored to its original place in the HCO list.
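A comparable sketch of round robin mode follows, again assuming the illustrative HostConfig model and the hypothetical try_send call from the earlier examples.

```python
import itertools

def round_robin_dispatcher(hco: HostConfig):
    """Return a send function that cycles requests across the Policy Servers the HCO lists."""
    rotation = itertools.cycle(hco.policy_servers)

    def send(request):
        # Make at most one pass over the list per request.
        for _ in range(len(hco.policy_servers)):
            server = next(rotation)
            if not server.available:
                continue                          # redirect to the next listed Policy Server
            response = try_send(server, request)  # hypothetical transport call
            if response is not None:
                return response
            server.available = False              # unresponsive; periodic polling restores it
        raise RuntimeError("No Policy Server responded")

    return send
```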
The following diagram illustrates the round robin process:
Round robin load balancing evenly distributes CA SiteMinder® Agent requests to all Policy Servers that the HCO lists. Although round robin is an efficient way to improve system availability and response times, consider that:
A Policy Server cluster is a group of Policy Servers to which Agents can distribute requests. Policy Server clusters provide the following benefits over round robin load balancing:
Note: For more information about configuring a Policy Server cluster, see the Policy Server Administration Guide.
The following diagram illustrates two Policy Server clusters. Each cluster is geographically separated to avoid the network overhead that can be associated with round robin load balancing.
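The following sketch illustrates the cluster idea: an Agent prefers the first (typically local) cluster and fails over to the next cluster when the share of available Policy Servers drops below a threshold. The threshold logic and the 50 percent default are simplifying assumptions made for illustration; see the Policy Server Administration Guide for the actual behavior.

```python
def pick_cluster(clusters, threshold_pct: int = 50):
    """Return the available servers of the first cluster that meets the failover threshold.

    clusters is an ordered list of lists of PolicyServerEndpoint objects,
    with the local cluster first. The 50 percent default is illustrative only.
    """
    for cluster in clusters:
        available = [s for s in cluster if s.available]
        if cluster and 100 * len(available) / len(cluster) >= threshold_pct:
            return available
    raise RuntimeError("No cluster meets the failover threshold")
```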