Last Updated: July 17, 2009
The CA NSM software package consists of a number of different components that must communicate with each other as well as with non-NSM components (e.g., the Windows and UNIX operating systems). For this reason, ensuring that the proper communication protocols and connections are in place prior to deployment is critical.
NSM uses:
- TCP/IP
- UDP
- ORB protocols (Agent Technology)
Refer to the Network Management section for additional information on communications protocols and how they are used by CA NSM.
By default the DSM assumes that all agents in a domain use the same community strings (i.e., "passwords") for read and write access, namely:
Admin = write
Public = read
NSM utilizes a number of communication ports, and this can become an issue if any of the components (e.g., agents) are deployed on different sides of a company firewall. For a list of ports and options for working with firewall restrictions, refer to the ports section of the product documentation.
DNS errors can easily compromise communication between NSM components. In addition to verifying that DNS has been properly configured, you should be aware that the search order used for DNS name resolution depends on the utility that makes the call. If an application (e.g., a TCP/IP utility) issues a gethostbyname API call, the search path is as follows:
- Check local host
- Check Host file
- Check DNS server
- Try NETBIOS name resolution
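On a UNIX NSM host you can spot-check the early stages of this search path from a shell. A minimal sketch, assuming standard resolver tools (localhost is only a placeholder for the node you are diagnosing):

```shell
# Spot-check the early stages of the gethostbyname search path.
NODE=localhost   # substitute the NSM node you are diagnosing

# Hosts file entry (on Windows the equivalent file lives under
# %SystemRoot%\system32\drivers\etc\hosts)
grep -i "$NODE" /etc/hosts

# Full resolver lookup (hosts file, then DNS, per the resolver config)
getent hosts "$NODE"
```

If the two answers disagree, a stale hosts-file entry is usually the culprit.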
The MS knowledge base, on the other hand, lists the search sequence for resolving a NETBIOS name as:
- NetBios name cache
- WINS server
- B-node broadcast
- LMHOSTS file
- HOSTS file
- DNS server
In the event you are working with a DHCP client, you can use either of the following commands to determine the machine's IP address:
- ping -a <ip address or nodename>
- ipconfig
Network Cards (NIC)
There may be times when you install agent technology components on a machine with multiple network cards, and in some of these cases you may need to force the agent technology components to communicate over a particular NIC. By default, agent technology binds to the primary NIC. If the agent services cannot communicate with the DSM over this NIC, they may fail, and you may face other problems such as traps being sent with the wrong IP, or performance agents failing to receive a profile or send a cube.
One example where this often occurs is on a clustered box. In a clustered environment the machines usually have two NICs: the first is used for internal communication between the machines of the cluster, and the second is used for external communication. In this case you would need to bind all the agent technology processes to the second NIC so they can communicate externally with the managers (or agents). Here is what you would need to do to bind the agent technology components to a particular IP address:
- Verify that all current patches have been applied.
- If you're planning to use performance agents or anything else that uses CAM, do the following; otherwise skip ahead to the quick.cfg step below:
- Run camsave config. This will generate a file called save.cfg.
- Rename this file to cam.cfg and make sure it resides under the %AGENTWORKS_DIR%\services directory.
- Verify that the CAI_CAFT and CAI_MSQ environment variables point to the directory where this file resides.
- Modify the cam.cfg file and add:
*routing
forward 127.0.0.1={local IP address to use}
- Modify the file %AGENTWORKS_DIR%\services\config\aws_orb\quick.cfg (this file is responsible for all the ORB traffic). Comment out the first PLUGIN line and modify the second as documented in the file so it looks like:
# PLUGIN awm_qikpipe aws_orb22
PLUGIN awm_qiksoc 7774@<your target IP address here>
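Filled in with a concrete address (192.0.2.10 is purely illustrative; substitute the NIC address you are binding to), the edited pair of lines would read:
# PLUGIN awm_qikpipe aws_orb22
PLUGIN awm_qiksoc 7774@192.0.2.10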
- Modify the file %AGENTWORKS_DIR%\services\config\aws_sadmin\aws_sadmin.cfg (this file is responsible for agent poll responses). Uncomment the TRAP_OVERRIDE_ADDR setting and specify the address you're binding to, such as:
TRAP_OVERRIDE_ADDR xxx.xxx.xxx.xxx
- Finally, modify the file %AGENTWORKS_DIR%\services\config\aws_snmp\aws_snmp.cfg. In some cases this file may not exist yet; if it does not, you can copy it from another machine.
Note: You must uncomment the IP_TO_BIND setting and specify the IP to bind to, such as:
IP_TO_BIND xxx.xxx.xxx.xxx
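Once the files are edited, a quick way to confirm a setting is really uncommented is to filter comment lines out and check that the setting survives. A minimal sketch (the sample file and the 192.0.2.10 address are illustrative; in practice point at %AGENTWORKS_DIR%\services\config\aws_snmp\aws_snmp.cfg):

```shell
# Sketch: confirm the IP_TO_BIND override is active (not commented out).
# A sample copy of the file is created here so the sketch is self-contained.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
# IP_TO_BIND xxx.xxx.xxx.xxx
IP_TO_BIND 192.0.2.10
EOF
# Only an uncommented IP_TO_BIND line should pass through the filter:
grep -v '^#' "$CFG" | grep IP_TO_BIND
rm -f "$CFG"
```

The same filter works for the TRAP_OVERRIDE_ADDR setting in aws_sadmin.cfg.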