To help prevent service interruptions, SiteMinder includes a failover feature. If the primary Policy Server fails and failover is enabled, a backup Policy Server takes over policy operations. Beginning with SiteMinder v6.0, failover can occur not only between Policy Servers, but between groups, or clusters, of Policy Servers.
The cluster functionality also improves server performance by providing dynamic load balancing between the servers in a cluster. With dynamic load balancing, policy operations are automatically distributed between the available servers in a cluster according to the performance capabilities of each server.
An agent running against Agent API v6.x can be associated with one or more Policy Servers, or with one or more clusters of Policy Servers, as follows (a code sketch follows this list):

- Clustered servers. In the ServerDef object for each clustered server, set clusterSeq() to the sequence number for the cluster. All servers in a cluster have the same cluster sequence number.

  Behavior: Failover occurs between clusters of servers if multiple clusters are defined. Requests to servers within a cluster are sent according to the improved performance-based load-balancing techniques introduced with Agent API v6.0.

- Non-clustered servers. In the ServerDef object for each non-clustered server, set clusterSeq() to 0.

  Behavior: Behavior is the same as in v5.x installations; that is, you can enable failover among the servers associated with the agent, or you can enable round-robin behavior among them. When round-robin behavior is enabled, the improved performance-based load-balancing techniques introduced with Agent API v6.0 are used.

Note: The same agent cannot be associated with both clustered and non-clustered servers.
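Below is a minimal sketch of the clustered case, written against the Java Agent API (package netegrity.siteminder.javaagent). The addresses and ports are placeholders, and the ServerDef members are assumed here to be public fields; depending on your Agent API version, the cluster sequence may instead be set through the clusterSeq() accessor named above.

```java
import java.util.ArrayList;
import java.util.List;

import netegrity.siteminder.javaagent.ServerDef;

public class ClusterServerDefs {

    // Builds ServerDef entries for two clusters. Every server in the same
    // cluster carries the same cluster sequence number; a value of 0 would
    // mark a non-clustered (v5.x-style) server instead.
    static List<ServerDef> buildClusteredServers() {
        List<ServerDef> servers = new ArrayList<ServerDef>();

        // Cluster 1: two Policy Servers, cluster sequence number 1.
        servers.add(defineServer("192.0.2.10", 1));    // placeholder address
        servers.add(defineServer("192.0.2.11", 1));    // placeholder address

        // Cluster 2: one Policy Server, cluster sequence number 2.
        servers.add(defineServer("198.51.100.10", 2)); // placeholder address

        return servers;
    }

    static ServerDef defineServer(String ipAddress, int clusterSequence) {
        ServerDef sd = new ServerDef();
        sd.serverIpAddress = ipAddress;
        sd.clusterSeq = clusterSequence;  // the setting referred to above as ServerDef.clusterSeq()
        sd.authenticationPort = 44442;    // default Policy Server ports
        sd.authorizationPort = 44443;
        sd.accountingPort = 44441;
        return sd;
    }

    public static void main(String[] args) {
        System.out.println(buildClusteredServers().size() + " Policy Servers defined.");
    }
}
```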
You can configure a cluster through the Agent API or through a host configuration object using the Policy Server User Interface. If you configure a cluster through the Agent API, be sure that the configuration does not conflict with any cluster configuration information that may be defined in the host configuration object.
You configure the individual servers or clusters of servers that the agent is associated with through the InitDef and ServerDef classes.
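Continuing the sketch above, the servers and the failover threshold are then passed to the agent through InitDef. The constructor shown here, taking the threshold as an integer percentage, and the addServerDef() call are assumptions based on the descriptions in this section; check the Agent API reference for the exact signatures in your release.

```java
import netegrity.siteminder.javaagent.AgentAPI;
import netegrity.siteminder.javaagent.InitDef;
import netegrity.siteminder.javaagent.ServerDef;

public class ClusteredAgentInit {
    public static void main(String[] args) {
        // One server from the primary cluster (placeholder address, cluster sequence 1).
        ServerDef first = new ServerDef();
        first.serverIpAddress = "192.0.2.10";
        first.clusterSeq = 1;

        // A server from the backup cluster (cluster sequence 2).
        ServerDef backup = new ServerDef();
        backup.serverIpAddress = "198.51.100.10";
        backup.clusterSeq = 2;

        // Constructor that carries the failover threshold percentage (60 here).
        // Host name and shared secret are placeholders.
        InitDef init = new InitDef("myTrustedHost", "mySharedSecret", 60, first);
        init.addServerDef(backup);   // remaining servers are registered the same way

        AgentAPI agent = new AgentAPI();
        int status = agent.init(init);
        System.out.println("AgentAPI.init returned " + status);
    }
}
```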
Cluster failover occurs according to the following configuration settings:

- Failover threshold percentage

  The failover threshold percentage applies to all clusters associated with the agent.

  To determine the number of servers that the percentage represents in any given cluster, multiply the threshold percentage by the number of servers in the cluster and round to the nearest integer (a sketch of this calculation follows the list). For example, with a 60-percent failover threshold for a cluster of five servers, failover to the next cluster occurs when the number of available servers in the currently active cluster falls below 3.

  Set through: the InitDef constructor that contains the failOverThreshold parameter.

- Cluster sequence number

  Set through: ServerDef.clusterSeq().
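The threshold calculation can be reproduced with a one-line helper. This is purely illustrative; SiteMinder performs the calculation internally.

```java
public class FailoverThreshold {

    // Minimum number of servers that must remain available in a cluster
    // before failover to the next cluster in the sequence occurs.
    static int minAvailableServers(int serversInCluster, int thresholdPercent) {
        return Math.round(serversInCluster * thresholdPercent / 100.0f);
    }

    public static void main(String[] args) {
        // 60-percent threshold, five-server cluster: failover when fewer than 3 are available.
        System.out.println(minAvailableServers(5, 60));  // 3
        // 60-percent threshold, three-server cluster: failover when fewer than 2 are available.
        System.out.println(minAvailableServers(3, 60));  // 2
    }
}
```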
If your site uses clusters, you typically will have a primary cluster and one or more backup clusters.
The primary cluster is the cluster to which SiteMinder sends requests as long as the number of available servers in the cluster does not fall below the failover threshold. If there are not enough available servers in the primary cluster, failover to the next cluster in the cluster sequence occurs. If that cluster also fails, failover to the third cluster occurs, and so on.
If the number of available servers falls below the failover threshold in all clusters associated with the agent, policy operations do not stop. Requests are sent to the first cluster in the cluster sequence that has at least one available server.
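The selection rule described in the two preceding paragraphs can be summarized in a short sketch. The Cluster class below is a hypothetical stand-in for the agent's internal view of cluster state (the Agent API does not expose clusters this way); the method simply walks the configured cluster sequence as described above.

```java
import java.util.List;

public class ClusterSelection {

    // Hypothetical view of a cluster's state: its size and how many of its
    // servers are currently available.
    static class Cluster {
        final String name;
        final int size;
        final int available;

        Cluster(String name, int size, int available) {
            this.name = name;
            this.size = size;
            this.available = available;
        }
    }

    // Returns the cluster that receives the next request: the first cluster in
    // the sequence whose available servers meet its failover threshold, or, if
    // none does, the first cluster with at least one available server.
    static Cluster activeCluster(List<Cluster> sequence, int thresholdPercent) {
        for (Cluster c : sequence) {
            int minAvailable = Math.max(1, Math.round(c.size * thresholdPercent / 100.0f));
            if (c.available >= minAvailable) {
                return c;
            }
        }
        for (Cluster c : sequence) {
            if (c.available >= 1) {
                return c;
            }
        }
        return null;  // no server available in any cluster
    }

    public static void main(String[] args) {
        List<Cluster> clusters = List.of(
                new Cluster("primary", 3, 1),   // below its threshold of 2
                new Cluster("backup", 5, 4));   // meets its threshold of 3
        System.out.println(activeCluster(clusters, 60).name);  // backup
    }
}
```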
For example, suppose an agent is associated with two clusters: C1, containing three servers, and C2, containing five servers. The failover threshold for any cluster associated with the agent is set at 60 percent.
The following table shows the minimum number of servers that must be available within each cluster:
| Cluster | Servers in Cluster | Failover Threshold (Percent) | Minimum Available Servers |
|---|---|---|---|
| C1 | 3 | 60 | 2 |
| C2 | 5 | 60 | 3 |
If the number of available servers falls below the threshold in each cluster, so that C1 has no available servers and C2 has just two, the next incoming request will be dispatched to a C2 server with the best response time. After at least two of the three C1 servers are repaired, subsequent requests are load-balanced among the available C1 servers.
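As a quick check of the numbers in this example (again purely illustrative), the following sketch reproduces the decision: neither cluster meets its threshold, so the request goes to the first cluster in the sequence with at least one available server.

```java
public class ClusterExampleWalkthrough {
    public static void main(String[] args) {
        // Thresholds from the table above, with a 60-percent failover threshold.
        int c1Min = Math.round(3 * 60 / 100.0f);  // 2
        int c2Min = Math.round(5 * 60 / 100.0f);  // 3

        // Scenario from the example: C1 has no available servers, C2 has two.
        int c1Available = 0;
        int c2Available = 2;

        System.out.println(c1Available >= c1Min);  // false: C1 is below its threshold
        System.out.println(c2Available >= c2Min);  // false: C2 is below its threshold

        // Neither cluster meets its threshold, so the request is dispatched to
        // the first cluster in sequence with at least one available server: C2.
        String target = (c1Available >= 1) ? "C1" : (c2Available >= 1) ? "C2" : "none";
        System.out.println(target);                // C2

        // After two of the three C1 servers are repaired, C1 meets its
        // threshold again and subsequent requests return to C1.
        c1Available = 2;
        System.out.println(c1Available >= c1Min);  // true
    }
}
```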
Agent API v6 is backwards-compatible with Agent API v5, allowing complete interoperability between v5/v6 agents and the v5/v6 Agent APIs.
Copyright © 2015 CA Technologies.
All rights reserved.