CA Technologies

CA Infrastructure Management Data Aggregator Readme Release 2.3.4


1.0 Welcome

2.0 How To Access Product Documentation

3.0 Upgrade Considerations

3.1 Data Repository Upgrade Fails Due to Use of Logical Volume Manager (LVM)

3.1.1 Data Repository - Single Node

3.1.2 Data Repository - Cluster

3.2 Reimport .csv File of Aliases for Monitored Devices

3.3 Migrate Existing CAMM Device Packs

3.4 CA Mediation Manager Known Limitation After Upgrade

3.5 Segment Database Tables

3.6 Change the Size of the Write Optimized Storage on Data Repository

3.7 CA Spectrum Support and Upgrade Considerations

3.8 Upgrade Impact for Environmental Sensor - Temperature Status (NormalizedTempSensorInfo) Metric Family and Device Components

4.0 Allocating Memory Usage

5.0 Prerequisite for Data Export

6.0 Known Issues

6.1 Data Repository Installation or Upgrade Incorrectly Detects Logical Volume Manager (LVM) and Fails

6.2 Data Repository Username and Data Repository Admin Username Cannot Be the Same

6.3 Multiple Octets and OOB Interface Metric Family

6.4 Unknown Status on Data Collector Instances That Must Be Upgraded

6.5 Dashboards Include Incorrect Devices or Component Items

6.6 Data Aggregator Memory Settings Not Stored in Release 2.3.1 and Release 2.3.2

7.0 Documentation Known Issues

7.1 Steps for Changing the Data Aggregator IP Address Are Incorrect

7.2 Steps for Setting Up Passwordless SSH for Root User Are Missing

7.3 Procedure in the Export Data Scenario is Unclear

7.4 MaxPercentofPollCycle Parameter Should Not Be Documented

7.5 Troubleshooting: Vertica Fails to Install in a Cluster Environment Topic Missing from Installation Guides

8.0 Contact CA Technologies


1.0 Welcome

Welcome to the CA Infrastructure Management Data Aggregator Readme. This Readme contains a complete list of the known issues for this release and details about how the features and enhancements for this release might affect you.


2.0 How To Access Product Documentation

This Readme contains the most recent list of known issues and workarounds. Additional product documentation is available from the Data Aggregator bookshelf, which can be accessed from the Help menu in the CA Performance Center user interface. The bookshelf can also be downloaded from CA Support. The bookshelf contains the Release Notes (with system requirements), online help, and guides in PDF and HTML format.

Context-sensitive online help is available for pages and views when you click a Help (?) button or select Help for This Page from the Help menu.


3.0 Upgrade Considerations

Upgrades of the CA Infrastructure Management software from previous releases are supported and are incremental. For information about upgrade paths, see the Data Aggregator Release Notes.


3.1 Data Repository Upgrade Fails Due to Use of Logical Volume Manager (LVM)

The following procedures describe how to transition a Data Repository that is running Vertica 6.0.2 with its data and catalog directories on LVM (Logical Volume Manager) volumes to Vertica 6.0.2 on non-LVM volumes. The Vertica database backs Data Repository, and Vertica has never supported running its database on LVM volumes. However, starting with Vertica 7.0.1-2 (Data Aggregator Release 2.3.4 requires Vertica 7.0.1-2), the Vertica installer enforces this requirement and does not allow Vertica to run on LVM.

The steps to migrate database directories that reside on LVM partitions to non-LVM partitions are described for both single node Data Repository deployments and clustered Data Repository deployments. If Data Repository is using volumes that LVM manages, Data Aggregator Release 2.3.4 cannot be installed.


3.1.1 Data Repository - Single Node

Important! Back up Data Repository before proceeding. Make sure that no scheduled backups will run during this time.

Important! You must have a local or networked partition with adequate free space to store the database contents temporarily while you convert the LVM partition.

Assumptions:

To proceed with the migration, do the following steps:

  1. Stop each Data Collector instance:
    1. ssh dc_hostname -l root
    2. /etc/init.d/dcmd stop
    3. /etc/init.d/dcmd status
  2. Stop Data Aggregator:
    1. ssh da_hostname -l root
    2. /etc/init.d/dadaemon stop
    3. /etc/init.d/dadaemon status
  3. As dradmin, stop the database:
    1. ssh dr_hostname -l dradmin
    2. Stop the database using /opt/vertica/bin/adminTools

Important! Do the following steps as the root user, unless otherwise specified.

  1. Make a temp directory, /tmp_data, to store the data directory contents temporarily. Make sure that the directory is located on a partition that has enough space to accommodate a full copy of the /data/drdata folder. This is a temporary storage location. The data will be moved from this location later.
    1. mkdir /tmp_data
    2. Mount the temporary partition (data_partition) at /tmp_data:

      mount data_partition /tmp_data

    3. Make a note of the size of the /data directory for future reference in step 4:

      du -ch /data | grep -i total

    4. Determine the amount of free disk space on the destination partition:

      df -h /tmp_data

    5. Verify that there is enough free disk space on the destination partition (the partition for /tmp_data) to accommodate a full copy of the /data directory.
  2. Change the permissions of the /tmp_data folder:

    chown dradmin:verticadba /tmp_data

  3. Move the database into the new directory.

    mv /data/drdata /tmp_data

  4. Ensure that the file size matches the size that you noted in step 1:

    du -ch /tmp_data | grep -i total

  5. Make a temp directory, /tmp_catalog, to store the catalog directory. Make sure that the directory is located on a partition that has enough space to accommodate a full copy of the /catalog/drdata folder. This is a temporary storage location. The data will be moved from this location later.
    1. mkdir /tmp_catalog
    2. Mount the temporary partition (data_partition) at /tmp_catalog:

      mount data_partition /tmp_catalog

    3. Make a note of the size of the /catalog directory for future reference in step 8:

      du -ch /catalog | grep -i total

    4. Determine the amount of free disk space on the destination partition:

      df -h /tmp_catalog

    5. Verify that there is enough free disk space on the destination partition (the partition for /tmp_catalog) to accommodate a full copy of the /catalog directory.
  6. Change the permissions of the /tmp_catalog folder:

    chown dradmin:verticadba /tmp_catalog

  7. Move the catalog into the new directory.

    mv /catalog/drdata /tmp_catalog

  8. Ensure that the file size matches the size that you noted in step 5:

    du -ch /tmp_catalog | grep -i total

  9. Make a note of the LVM mount points by recording the output of the mount command:

    mount

  10. Unmount /data and /catalog:

    umount /data

    umount /catalog

    Note: If you get a "busy" error, ensure that no shells or applications are accessing these directories.

  11. Re-establish non-LVM volumes on /data and /catalog. There are three approaches; use whichever fits your environment.

  12. Remount all filesystems:

    mount -a

  13. Move the data from the temporary directories back into the /data and /catalog directories that Vertica knows:
    1. mv /tmp_data/drdata /data
    2. mv /tmp_catalog/drdata /catalog
  14. Ensure that the size of the /data directory matches the size that you noted in step 1:

    du -ch /data | grep -i total

  15. Ensure that the size of the /catalog directory matches the size that you noted in step 5:

    du -ch /catalog | grep -i total

  16. Restart the database:
    1. su - dradmin
    2. /opt/vertica/bin/adminTools

    Note: Starting the database can take several minutes.

  17. Verify that the database is running:
    1. su - dradmin
    2. /opt/vertica/bin/adminTools
    3. Select "View Database Cluster State" and verify that the database state is "UP".
  18. Restart Data Aggregator:
    1. ssh da_hostname -l root
    2. /etc/init.d/dadaemon start
    3. /etc/init.d/dadaemon status
  19. Start each Data Collector instance:
    1. ssh dc_hostname -l root
    2. /etc/init.d/dcmd start
    3. /etc/init.d/dcmd status
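The size checks in the procedure above (compare the du output before and after each move) can be scripted. The sketch below is a hypothetical helper using only standard coreutils; in the procedure, the arguments would be directories such as /data/drdata and /tmp_data/drdata, while throwaway directories are used here for demonstration.

```shell
# Hypothetical helper: verify that a moved directory is the same size
# as the original before deleting anything.
verify_same_size() {
    src_kb=$(du -sk "$1" | awk '{print $1}')
    dst_kb=$(du -sk "$2" | awk '{print $1}')
    if [ "$src_kb" -eq "$dst_kb" ]; then
        echo "OK: $1 and $2 are both ${src_kb} KB"
    else
        echo "MISMATCH: $1 is ${src_kb} KB, $2 is ${dst_kb} KB" >&2
        return 1
    fi
}

# Demonstration with throwaway directories:
src=$(mktemp -d)
dst=$(mktemp -d)
dd if=/dev/zero of="$src/blob" bs=1024 count=64 2>/dev/null
cp "$src/blob" "$dst/blob"
verify_same_size "$src" "$dst"
```

If the sizes differ, stop and investigate before removing or unmounting anything.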

3.1.2 Data Repository - Cluster

Important! Back up Data Repository before proceeding. Make sure that no scheduled backups will run during this time.

Assumptions:

To proceed with the migration, do the following steps:

  1. Stop each Data Collector instance:
    1. ssh dc_hostname -l root
    2. /etc/init.d/dcmd stop
    3. /etc/init.d/dcmd status
  2. Stop Data Aggregator:
    1. ssh da_hostname -l root
    2. /etc/init.d/dadaemon stop
    3. /etc/init.d/dadaemon status

Steps to Migrate a Node In a Cluster

Important! Do the following steps as the root user, unless otherwise specified.

Do the following steps for each node in the cluster. Follow all of the steps (steps 1-15) for one node at a time.

Important! Use adminTools to verify that the database is running.

  1. Make note of the IP address for the current node:

    ifconfig

  2. As the dradmin user, access adminTools:
    1. su - dradmin
    2. /opt/vertica/bin/adminTools
  3. Stop Vertica on the host:
    1. Navigate to "Advanced Tools Menu". Press enter.
    2. Navigate to "Stop Vertica on Host". Press enter.
    3. Select the appropriate host IP address as found in step 1 in the section, "Steps to Migrate a Node In a Cluster". Press Enter.
    4. Navigate to "Main Menu". Press enter.
    5. Navigate to "Exit". Press enter.
  4. Switch back to the root user:

    exit

  5. Verify that the following command outputs "root":

    whoami

  6. Remove the files from the /data directory:

    rm -rf /data/drdata

  7. Remove the files from the /catalog directory:

    rm -rf /catalog/drdata

  8. Record the output of the following commands for debugging purposes:
    1. mount
    2. cat /etc/fstab
  9. Unmount the /data LVM directory:

    umount /data

  10. Unmount the /catalog LVM directory:

    umount /catalog

  11. Re-establish non-LVM volumes on /data and /catalog. There are three approaches; use whichever fits your environment.

  12. Remount all file systems:

    mount -a

  13. Create the drdata folder with correct permissions within /data and /catalog:
    1. mkdir -p /data/drdata
    2. mkdir -p /catalog/drdata
    3. chown -R dradmin:verticadba /data
    4. chown -R dradmin:verticadba /catalog
  14. Restart Vertica on the host:
    1. su - dradmin
    2. /opt/vertica/bin/adminTools
    3. Use the down arrow key to navigate to "Restart Vertica on host". Press enter.
  15. Continue to monitor adminTools. The status for the current node remains "Recovering" while the data is rebuilt. Do not continue until the database is back "UP". The database can take a considerable amount of time to transition to the "UP" state.
    1. Select "View Database Cluster State". Press enter.
    2. Press enter to escape to the Main Menu.

    After the database is back up, repeat steps 1-15, "Steps to Migrate a Node In a Cluster", for the next node. Continue through these steps until all Data Repository nodes are migrated off LVM.

After you complete the steps in the section "Steps to Migrate a Node In a Cluster" for all Data Repository nodes, do the following steps:

  1. Log in to any Data Repository node:

    su - dradmin

    /opt/vertica/bin/vsql -U dradmin -w drpass

  2. Run the following vsql commands to re-establish custom application settings:
    1. SELECT set_config_parameter('MaxClientSessions',1024);
    2. SELECT set_config_parameter('StandardConformingStrings','0');
  3. Start Data Aggregator:
    1. ssh da_hostname -l root
    2. /etc/init.d/dadaemon start
    3. /etc/init.d/dadaemon status
  4. Start all Data Collector instances:
    1. ssh dc_hostname -l root
    2. /etc/init.d/dcmd start
    3. /etc/init.d/dcmd status
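Because steps 1-15 must be repeated node by node, it can help to print a checklist of the per-node sequence before starting. The following dry-run sketch only prints the plan; the node IPs are hypothetical.

```shell
# Dry-run sketch: print the per-node sequence from "Steps to Migrate a
# Node In a Cluster" for each node, one node at a time.
NODES="10.0.0.1 10.0.0.2 10.0.0.3"

print_plan() {
    for node in $NODES; do
        cat <<EOF
== Node $node ==
  adminTools: Stop Vertica on Host $node
  rm -rf /data/drdata /catalog/drdata
  record output of: mount; cat /etc/fstab
  umount /data; umount /catalog
  (re-establish non-LVM volumes, then: mount -a)
  mkdir -p /data/drdata /catalog/drdata
  chown -R dradmin:verticadba /data /catalog
  adminTools: Restart Vertica on host; wait for state UP
EOF
    done
}

print_plan
```

Work through one node's checklist completely, and wait for the "UP" state, before starting the next node.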

3.2 Reimport .csv File of Aliases for Monitored Devices

If you imported a .csv file of alias names for monitored devices in CA Infrastructure Management Data Aggregator Release 2.3.3, reimport the file after you upgrade to Release 2.3.4. Alias names will not be recognized if you do not reimport the file.


3.3 Migrate Existing CAMM Device Packs

In CA Infrastructure Management Data Aggregator Release 2.3.4, there is a change in the way device packs are deployed and configured. If you are upgrading from Release 2.3.3 to Release 2.3.4, install CA Mediation Manager (CAMM) components and run the migration script to migrate the existing device packs. See the complete CAMM documentation set at https://www.wiki.ca.com/camm.


3.4 CA Mediation Manager Known Limitation After Upgrade

The architecture of the integration with CA Mediation Manager has been significantly enhanced. Version 2.2.6 of CA Mediation Manager is required to run with CA Infrastructure Management Release 2.3.4. However, that version of the integration does not support the Device Pack Generator utility.

Future versions of CA Mediation Manager will support an enhanced version of this utility. Until then, custom device packs are not supported.

Important! CA Mediation Manager 2.2.6 is not fully backward-compatible with previous versions of CA Infrastructure Management. To process the raw data, you must upgrade Data Collector to Release 2.3.4. Be sure to migrate your device packs before you upgrade CA Infrastructure Management. See the scenario on the CA Infrastructure Management Data Aggregator Documentation Bookshelf titled "How to Migrate Device Packs" for more information.


3.5 Segment Database Tables

If you are upgrading CA Infrastructure Management Data Aggregator and if Data Repository is installed in a cluster environment, verify that the database tables are segmented after you upgrade the Data Repository component and before you upgrade the Data Aggregator component.

Note: For more information about verifying if the database tables are segmented, see the CA Infrastructure Management Data Aggregator upgrade guides.


3.6 Change the Size of the Write Optimized Storage on Data Repository

If you are managing one million or more polled items, change the size of the Write Optimized Storage (WOS) on Data Repository from the default of 2 GB to an increased value of 4 GB. Because this operation requires Data Aggregator to be shut down, we recommend that you perform the following steps before upgrading Data Aggregator.

  1. Log in to the computer where Data Aggregator is installed. To stop Data Aggregator, open a command prompt and type the following command:

    service dadaemon stop

  2. SSH to a Data Repository node.
  3. To move all data that is in Write Optimized Storage (WOS) to Read Optimized Storage (ROS), type the following command:

    /opt/vertica/bin/vsql -U database_admin_user -w database_admin_user_password -c "select do_tm_task('moveout');"

  4. To verify that no data remains in WOS, type the following command:

    /opt/vertica/bin/vsql -U database_admin_user -w database_admin_user_password -c "select sum(region_in_use_size_kb) as wos_usage_kb from wos_container_storage;"

    If this command does not return a 0 value, wait 5 minutes and then issue the command again. If after 5 minutes the value that is returned is still greater than 0, retype the command in step 3 and then issue the command in this step again.

  5. To increase the size of WOS to 4 GB, type the following command:

    /opt/vertica/bin/vsql -U database_admin_user -w database_admin_user_password -c "alter resource pool wosdata maxMemorySize '4G';"
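The moveout-and-verify cycle in steps 3 and 4 can be sketched as a retry loop. In production, do_moveout and wos_usage_kb would wrap the vsql commands shown above; here they are stubs (WOS "drains" on the second check) so that the control flow can be demonstrated without a live database.

```shell
# Sketch of the moveout/verify loop from steps 3 and 4, with stubbed
# database calls. The counter file simulates WOS draining.
CNT_FILE=$(mktemp)
echo 0 > "$CNT_FILE"

do_moveout() {
    # Real command (step 3):
    # /opt/vertica/bin/vsql -U "$DB_USER" -w "$DB_PASS" -c "select do_tm_task('moveout');"
    :
}

wos_usage_kb() {
    # Real query (step 4):
    # /opt/vertica/bin/vsql ... -c "select sum(region_in_use_size_kb) from wos_container_storage;"
    n=$(($(cat "$CNT_FILE") + 1))
    echo "$n" > "$CNT_FILE"
    if [ "$n" -ge 2 ]; then echo 0; else echo 512; fi
}

do_moveout
while [ "$(wos_usage_kb)" -gt 0 ]; do
    echo "WOS not empty; retrying moveout"   # the documented wait is 5 minutes
    do_moveout
done
echo "WOS empty; safe to resize the wosdata pool to 4G"
```

Only once the usage query returns 0 should the alter resource pool command from step 5 be issued.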


3.7 CA Spectrum Support and Upgrade Considerations

If you plan to register a CA Spectrum data source with CA Infrastructure Management Release 2.3.4, we recommend upgrading to CA Spectrum Release 9.3. Earlier versions of CA Spectrum do not fully support the following new features:

Note: For information about upgrading CA Spectrum to Release 9.3, see the CA Spectrum Release 9.3 documentation.


3.8 Upgrade Impact for Environmental Sensor - Temperature Status (NormalizedTempSensorInfo) Metric Family and Device Components

New device components discovered based on the Environmental Sensor - Temperature Status metric family have a new context page available out-of-the-box. However, existing device components that are associated with Environmental Sensor - Temperature Status may display on a different context page. This design lets you keep historical data. For existing device components to display on the same context page as the newly discovered device components, delete and rediscover the corresponding devices.


4.0 Allocating Memory Usage

Use the following information to help you configure memory usage for Data Aggregator:


5.0 Prerequisite for Data Export

Enabling the data export feature does not change the CPU, memory, or network I/O requirements for Data Aggregator. However, data export requires an additional, separate disk partition. For a medium-size deployment, the partition must be 50 GB, which permits the retention of one hour of data before a batch job moves the files to another file system.


6.0 Known Issues


6.1 Data Repository Installation or Upgrade Incorrectly Detects Logical Volume Manager (LVM) and Fails

Data Repository cannot be installed if Logical Volume Manager (LVM) is being used to manage volumes that Data Repository uses.

The Vertica database backs Data Repository, and Vertica has never supported running its database on LVM volumes. However, starting with Vertica 7 (Data Aggregator Release 2.3.4 requires Vertica 7), the Vertica installer enforces this requirement and does not allow Vertica to run on LVM.

There is a known issue with the Vertica 7.0.1-2 installer. If LVM is detected on any volumes (not just volumes that Vertica uses) within the cluster, the installer will generate a WARN message. The specific WARN message is as follows:

WARN (S0170): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0170

lvscan (LVM utility) indicates some active volumes.

If you encounter the WARN message during the execution of dr_install.sh and you have verified that the catalog and data directories that Vertica uses are not managed by LVM, take further steps to help ensure a successful installation or upgrade of Vertica.

Note: If the catalog and data directories that Vertica uses are managed by LVM, refer to the Upgrade Considerations section.

Important! Perform the following steps only after you have verified that the dr_install.sh script has not generated any additional WARN or ERROR messages unrelated to LVM.

Do the following steps:

  1. Search for the line in the dr_install.sh script that begins with "/opt/vertica/sbin/install_vertica". The line should look like the following line:

    /opt/vertica/sbin/install_vertica -s $DB_HOST_NAMES -u $DB_ADMIN_LINUX_USER -l $DB_ADMIN_LINUX_USER_HOME -d $DB_DATA_DIR -L ./resources/$VLICENSE -Y -r ./resources/$VERTICA_RPM_FILE $POINT_TO_POINT_SPREAD_OPTION 2>&1 | tee -a $LOG_FILE

  2. After the "-d $DB_DATA_DIR" entry in the line, add the following new entry, surrounded by a space on each side:

    --failure-threshold FAIL

    The line should now look like the following line:

    /opt/vertica/sbin/install_vertica -s $DB_HOST_NAMES -u $DB_ADMIN_LINUX_USER -l $DB_ADMIN_LINUX_USER_HOME -d $DB_DATA_DIR --failure-threshold FAIL -L ./resources/$VLICENSE -Y -r ./resources/$VERTICA_RPM_FILE $POINT_TO_POINT_SPREAD_OPTION 2>&1 | tee -a $LOG_FILE

    Adding this entry ensures that the installation fails only if one or more FAIL messages are encountered during installation. The LVM-specific WARN message is ignored, and the installation completes successfully.

  3. To install or upgrade Vertica, re-execute the dr_install.sh script. The LVM-specific WARN message is bypassed.

    When you re-execute dr_install.sh, you will see the following LVM WARN message:

    WARN (S0170): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0170

    lvscan (LVM utility) indicates some active volumes.

    However, this WARN message will not block the installation or upgrade of Vertica 7.
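The edit in step 2 can also be made with a sed one-liner. The sketch below runs against a fabricated one-line stand-in for dr_install.sh, keeps a .bak backup, and splices the new option in after "-d $DB_DATA_DIR".

```shell
# Sketch: insert --failure-threshold FAIL into the install_vertica line.
# A fabricated one-line dr_install.sh stands in for the real script.
work=$(mktemp -d)
cat > "$work/dr_install.sh" <<'EOF'
/opt/vertica/sbin/install_vertica -s $DB_HOST_NAMES -u $DB_ADMIN_LINUX_USER -l $DB_ADMIN_LINUX_USER_HOME -d $DB_DATA_DIR -L ./resources/$VLICENSE -Y -r ./resources/$VERTICA_RPM_FILE $POINT_TO_POINT_SPREAD_OPTION 2>&1 | tee -a $LOG_FILE
EOF

# Keep a backup, then splice the option in with a space on each side.
sed -i.bak 's|-d \$DB_DATA_DIR|-d $DB_DATA_DIR --failure-threshold FAIL|' "$work/dr_install.sh"
grep -- '--failure-threshold FAIL' "$work/dr_install.sh"
```

Inspect the resulting line against the example in step 2 before re-executing the script.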


6.2 Data Repository Username and Data Repository Admin Username Cannot Be the Same

When you install the Data Aggregator component and you are prompted for the Data Repository credentials, do not use the same username for the Data Repository username and the Data Repository admin username. The Data Aggregator enforces that these usernames are different during a new installation.


6.3 Multiple Octets and OOB Interface Metric Family

When you create a custom certification for Interface/Port components, if the index of the MIB table has multiple octets (for example: 23.4.5.12), then you cannot use the out-of-box Interface metric family for your certification. Using the Interface metric family in this situation causes synchronization problems in CA Performance Center.

Workaround:

Use the Alternate Interface metric family or create your own custom metric family. This action causes your interface/port items to show up under device components, although this result may not be ideal.


6.4 Unknown Status on Data Collector Instances That Must Be Upgraded

If a Data Collector instance has not been upgraded, it can appear in the System Status table for Data Collectors with a status of Unknown instead of indicating that an upgrade is required.


6.5 Dashboards Include Incorrect Devices or Component Items

If you are upgrading from Release 2.3.1 or prior and CA Performance Center dashboards seem to include devices or component items that are not present in the group that the dashboard was generated for, do the following steps before you upgrade CA Infrastructure Management Data Aggregator:

  1. Type the following command to start vsql:

    /opt/vertica/bin/vsql -U username -w password

  2. Run the following VSQL code:

    DELETE FROM item_relationship WHERE left_item_id IN (SELECT item_id FROM item g WHERE g.item_name='group name'); COMMIT;


6.6 Data Aggregator Memory Settings Not Stored in Release 2.3.1 and Release 2.3.2

The installers for Release 2.3.1 and Release 2.3.2 do not store Data Aggregator memory settings in the DA.cfg file when running in a language other than English.

If you already upgraded CA Infrastructure Management from Release 2.3.1 to Release 2.3.2 or Release 2.3.3 on any language other than English, do the following steps to update the Data Aggregator memory settings manually:

  1. Log in to the computer where Data Aggregator is installed. To stop Data Aggregator, open a command prompt and type the following command:

    service dadaemon stop

  2. Access the Data Aggregator installation directory/apache-karaf-2.3.0/bin/setenv.old file.
  3. Locate the export IM_MAX_MEM= line and make a note of its value (a number with a unit, for example, 2369M).
  4. Access the Data Aggregator installation directory/apache-karaf-2.3.0/bin/setenv file.
  5. Append the value that you noted to the export IM_MAX_MEM= line.
  6. Save the file.
  7. To restart Data Aggregator, type the following command:

    service dadaemon start
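The manual edit in steps 2 through 5 can be sketched with sed. The files below are fabricated stand-ins for the real apache-karaf-2.3.0/bin/setenv.old and setenv files, and the 2369M value is illustrative.

```shell
# Sketch: copy the IM_MAX_MEM value from setenv.old into setenv.
work=$(mktemp -d)
echo 'export IM_MAX_MEM=2369M' > "$work/setenv.old"
echo 'export IM_MAX_MEM=' > "$work/setenv"

# Pull the value (number plus unit) out of the old file...
val=$(sed -n 's/^export IM_MAX_MEM=\(.*\)$/\1/p' "$work/setenv.old")
# ...and append it to the empty assignment in the new file.
sed -i.bak "s/^export IM_MAX_MEM=\$/export IM_MAX_MEM=${val}/" "$work/setenv"
grep '^export IM_MAX_MEM=' "$work/setenv"
```

After updating the real setenv file, restart Data Aggregator as shown in step 7.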

If you have not yet upgraded CA Infrastructure Management from Release 2.3.1 to Release 2.3.2 or Release 2.3.3 on any language other than English, do the following steps to update the Data Aggregator memory settings manually:

  1. Verify that the /etc/DA.cfg file does not have the da.memory variable defined. If the da.memory variable is defined, you can upgrade CA Infrastructure Management and you do not need to continue with this procedure. The following information is a sample DA.cfg file:

    da.home=/opt/IMDataAggregator

    da.karaf.home=/opt/IMDataAggregator/apache-karaf-2.3.0

    da.version=2.3.2.1

    da.user=root

    da.memory=2369M

    da.activemq.home=/opt/IMDataAggregator/broker/apache-activemq-5.5.1d

    da.activemq.memory=790M

    da.activemq.version=5.5.1d

  2. Access the Data Aggregator installation directory/apache-karaf-2.3.0/bin/setenv file. Locate the export IM_MAX_MEM= line and make a note of its value.
  3. Access the Data Aggregator installation directory/broker/apache-activemq-5.5.1X directory and make a note of the activemq.home information.
  4. Access the Data Aggregator installation directory/broker/apache-activemq-5.5.1X/bin/activemq file. Locate the ACTIVEMQ_OPTS_MEMORY= line and make a note of its value. Also make a note of the ActiveMQ version.
  5. Update the /etc/DA.cfg file with the values that you noted: da.memory, da.activemq.home, da.activemq.memory, and da.activemq.version.
  6. Upgrade CA Infrastructure Management.

7.0 Documentation Known Issues


7.1 Steps for Changing the Data Aggregator IP Address Are Incorrect

The steps for changing the Data Aggregator IP address in the topic "Troubleshooting: A Change in My Environment Requires That I Change the Data Aggregator IP Address" are incorrect. You do not have to modify any files with the Data Aggregator IP address change. Simply restart Data Aggregator for the IP address change to take effect.

Do the following steps:

  1. Stop Data Aggregator by logging in to the computer where Data Aggregator is installed as the root user. Type the following command:

    service dadaemon stop

    Data Aggregator stops.

  2. Restart Data Aggregator by logging in to the computer where Data Aggregator is installed as the root user. Open a command prompt and type the following command:

    service dadaemon start


7.2 Steps for Setting Up Passwordless SSH for Root User Are Missing

In the Data Aggregator installation guides, the topic "Install the Data Repository Component" is missing an optional step. After step 7 and before step 8, you can optionally set up passwordless SSH for the root user in cluster environments. With passwordless SSH set up, the root user is not required to enter credentials during installation. To set up passwordless SSH for the root user from one Data Repository host to another, do the following steps:

  1. Open a console and log in to the Data Repository host as the root user.
  2. Type the following commands:

    ssh-keygen -N "" -t rsa -f ~/.ssh/id_rsa

    cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys2

    chmod 644 ~/.ssh/authorized_keys2

  3. To copy the root user public key into the remote host's list of authorized keys, type the following command:

    ssh-copy-id -i ~/.ssh/id_rsa.pub root_user@remotehost

  4. To verify that passwordless ssh is set up correctly, login to the remote host from the local host:

    ssh root_user@remotehost ls

  5. Repeat steps 1-4 for each pair of hosts.

    Note: A three-node cluster requires six variations of the previous steps.

If passwordless SSH has been set up successfully, you are not prompted for a password. You also see a directory listing from the 'ls' command.
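The note about six variations follows from needing an ordered pair for every direction between every two hosts. The dry-run sketch below enumerates those pairs for hypothetical hostnames; the commands are printed, not executed.

```shell
# Dry run: enumerate the ordered host pairs that need passwordless SSH.
# For a three-node cluster this yields six pairs. Hostnames are
# hypothetical.
HOSTS="dr1 dr2 dr3"
pairs=0
for src in $HOSTS; do
    for dst in $HOSTS; do
        [ "$src" = "$dst" ] && continue
        echo "on $src: ssh-copy-id -i ~/.ssh/id_rsa.pub root@$dst"
        pairs=$((pairs + 1))
    done
done
echo "total pairs: $pairs"
```

In general, an n-node cluster requires n*(n-1) such setups.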


7.3 Procedure in the Export Data Scenario is Unclear

The procedure, "Start the Rate Data Export Feature" in the scenario "Export Data" is unclear. The correct steps for this procedure are as follows:

  1. Set up a REST client with a connection to the Data Aggregator server.
  2. Set the REST client's Content-type to application/xml.
  3. Enter the following URL:

    GET http://da_hostname:port/rest/dataexport/

  4. Take note of the id of the data export profile you wish to modify. By default, there is only one profile.
  5. Enter text in the Body tab of the HTTP Request pane. At a minimum, set Enabled to true. For example:

       <DataExportInfo version="1.0.0">

                   <Enabled>true</Enabled>

       </DataExportInfo>

  6. Review other options that can be set at the following URL:

    http://da_hostname:port/rest/dataexport/xsd/get.xsd

  7. Save and start the rate data export feature by entering the following URL:

    PUT http://da_hostname:port/rest/dataexport/id

  8. To verify that your changes took effect, enter the following URL:

    GET http://da_hostname:port/rest/dataexport/id

The data export starts automatically and temporary export files are created.

When the export file is ready, the exporter automatically renames it to the file extension you configured previously (such as .csv).

You do not need to restart the services for a newly written file.

After the data is exported, copy the data to your other system using the method required by that other system.

Important! You must regularly process the output CSV files to avoid exhausting the available disk space.
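The REST calls in the steps above can be issued with a command-line client such as curl. In the sketch below, da_hostname, port 8581, and profile id 1 are placeholders; the commands are printed rather than executed, because they require a live Data Aggregator.

```shell
# Sketch of the data export REST calls as curl commands (printed only).
BASE="http://da_hostname:8581/rest/dataexport"
BODY='<DataExportInfo version="1.0.0"><Enabled>true</Enabled></DataExportInfo>'

# Step 3: list profiles and find the id.
GET_PROFILES="curl -s -H 'Content-Type: application/xml' $BASE/"
# Step 7: enable and start the export for profile 1.
ENABLE_EXPORT="curl -s -X PUT -H 'Content-Type: application/xml' -d '$BODY' $BASE/1"
# Step 8: verify the change.
VERIFY="curl -s -H 'Content-Type: application/xml' $BASE/1"

echo "$GET_PROFILES"
echo "$ENABLE_EXPORT"
echo "$VERIFY"
```

Any REST client can be used instead; the essential parts are the application/xml content type, the PUT to the profile id, and the Enabled element set to true.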


7.4 MaxPercentofPollCycle Parameter Should Not Be Documented

The scenario, "Monitor the Health of the Threshold Monitoring Engine" incorrectly discusses how to increase the MaxPercentOfPollCycle parameter. This parameter has been removed from the product and is not available.


7.5 Troubleshooting: Vertica Fails to Install in a Cluster Environment Topic Missing from Installation Guides

The topic "Troubleshooting: Vertica Fails to Install in a Cluster Environment" is missing from the Data Aggregator installation guides.

Symptom:

Vertica fails to install in my cluster environment.

Solution:

Set up passwordless SSH for the Vertica Linux database administrator user and then retry the installation. Do the following steps to set up passwordless SSH:

  1. Open a console and log in to the Data Repository host as the Vertica Linux database administrator user.
  2. Type the following commands:

    ssh-keygen -N "" -t rsa -f ~/.ssh/id_rsa

    cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys2

    chmod 644 ~/.ssh/authorized_keys2

  3. To copy the Vertica Linux database administrator user public key into the remote host's list of authorized keys, type the following command:

    ssh-copy-id -i ~/.ssh/id_rsa.pub database_admin_user@remotehost

  4. To verify that passwordless ssh is set up correctly, login to the remote host from the local host:

    ssh database_admin_user@remotehost ls

  5. Repeat steps 1-4 for each pair of hosts.

    Note: A three-node cluster requires six variations of the previous steps.

If passwordless SSH has been set up successfully, you are not prompted for a password. You also see a directory listing from the 'ls' command.


8.0 Contact CA Technologies

Contact CA Support

For your convenience, CA Technologies provides one site where you can access the information that you need for your Home Office, Small Business, and Enterprise CA Technologies products. At http://ca.com/support, you can access the following resources:

Providing Feedback About Product Documentation

If you have comments or questions about CA Technologies product documentation, you can send a message to techpubs@ca.com.

To provide feedback about CA Technologies product documentation, complete our short customer survey which is available on the CA Support website at http://ca.com/docs.