CA Technologies

CA Performance Management Data Aggregator Readme 2.4.1


1.0 Welcome

2.0 How To Access Product Documentation

3.0 Supported Upgrade Paths

4.0 Upgrade Considerations

4.1 Data Repository Upgrade Fails Due to Use of Logical Volume Manager (LVM)

4.1.1 Data Repository - Single Node

4.1.2 Data Repository - Cluster

4.2 CA Mediation Manager Known Limitation After Upgrade

4.3 Segment Database Tables

4.4 Change the Size of the Write Optimized Storage on Data Repository

4.5 CA Spectrum Support and Upgrade Considerations

5.0 Prerequisite for Data Export

6.0 Reduce CAMM DC Upgrade Run Time

7.0 Longevity Upgrade Time Reduced (2.4.1)

8.0 Known Issues

8.1 Data Repository Installation or Upgrade Incorrectly Detects Logical Volume Manager (LVM) and Fails

8.2 Data Repository Username and Data Repository Admin Username Cannot Be the Same

8.3 Multiple Octets and OOB Interface Metric Family

8.4 Alu7750 Custom Certification Does Not Work as Expected

8.5 Outdated References to Vertica in Documentation

9.0 Contact CA Technologies


1.0 Welcome

Welcome to the Data Aggregator Readme. This Readme contains a complete list of the known issues for this release and details about how the features and enhancements for this release might affect you.


2.0 How To Access Product Documentation

This Readme contains the most recent list of known issues and workarounds. Additional product documentation is available from the Data Aggregator bookshelf, which can be accessed from the Help menu in the CA Performance Center user interface. The bookshelf can also be downloaded from CA Support. The bookshelf contains the Release Notes (with system requirements), online help, and guides in PDF and HTML format.

Context-sensitive online help is available for pages and views when you click a Help (?) button or select Help for This Page from the Help menu.


3.0 Supported Upgrade Paths

If you are upgrading from a previous release of Data Aggregator, always upgrade the CA Performance Center, Data Aggregator, and Data Collector components. Upgrade the Data Repository component only when you are upgrading to the releases identified in the table that follows.

Important! If you are upgrading from Release 2.0.00 to Release 2.4, first upgrade to Release 2.1.00, then to Release 2.2.x, and then to Release 2.3.

The following table indicates the supported upgrade paths and indicates which components to upgrade:

Release                              | CA Performance Center | Data Aggregator  | Data Collector   | Data Repository
-------------------------------------|-----------------------|------------------|------------------|---------------------
2.0.00 to 2.1.00                     | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Not Required
2.1.00 to 2.2.00                     | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Required
2.2.00 to 2.2.1                      | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Not Required
2.2.00 or 2.2.1 to 2.2.2             | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Required
2.2.[1, 2, 3] to 2.3.[0, 1, 2, 3]    | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Not Required
2.2.x to 2.3.4 (see Note)            | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Required
2.3.[0, 1, 2, 3] to 2.3.4 (see Note) | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Required
2.3.4 to 2.4                         | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Not Required
2.3.3 to 2.4                         | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Required
2.4 to 2.4.1                         | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Required
2.3.4 to 2.4.1                       | Upgrade Required      | Upgrade Required | Upgrade Required | Upgrade Required

Note: Vertica Release 7 is introduced in Release 2.3.4.

Note: For information about upgrading Data Aggregator components, see the Data Aggregator Installation Guide. For information about upgrade requirements and considerations for releases before 2.3.x, see the Release Notes or Fixed Issues file for the release to which you are upgrading.


4.0 Upgrade Considerations

Upgrades of the CA Performance Management software from previous releases are supported, and are incremental. For information about upgrade paths, see the Data Aggregator Release Notes.


4.1 Data Repository Upgrade Fails Due to Use of Logical Volume Manager (LVM)

The following procedures describe how to transition a Data Repository running Vertica 6.0.2 with its data and catalog directories on LVM (Logical Volume Manager) volumes to Vertica 6.0.2 on non-LVM volumes. Data Repository is backed by the Vertica database, and Vertica has never supported running its database on LVM volumes. Starting with Vertica 7.0.1-2 (required by Data Aggregator Release 2.3.4 and Release 2.4), the Vertica installer enforces this requirement and does not allow Vertica to run on LVM.

The steps to migrate database directories from LVM partitions to non-LVM partitions are described for both single-node and clustered Data Repository deployments. If Data Repository uses LVM-managed volumes, Data Aggregator Release 2.3.4 and Release 2.4 cannot be installed.
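Before starting the migration, you may want to confirm which mount points are actually LVM-managed. A minimal sketch, assuming LVM block devices appear under /dev/mapper (the common default; the output of lvscan is authoritative):

```shell
#!/bin/sh
# Sketch: check whether /data and /catalog sit on LVM-managed devices.
# Assumption: LVM devices are exposed under /dev/mapper, as is typical.
is_on_lvm() {
    # $1 = mount point; succeeds (returns 0) if its backing device looks LVM-managed
    dev=$(df -P "$1" 2>/dev/null | awk 'NR==2 {print $1}')
    case "$dev" in
        /dev/mapper/*) return 0 ;;
        *) return 1 ;;
    esac
}

for d in /data /catalog; do
    if is_on_lvm "$d"; then
        echo "$d is on LVM: migrate it before upgrading"
    else
        echo "$d looks non-LVM"
    fi
done
```

Cross-check the result against lvscan before deciding to skip the migration.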


4.1.1 Data Repository - Single Node

Important! Back up Data Repository before proceeding. Make sure that no scheduled backups will run during this time.

Important! You must have a local or networked partition with adequate free space to store the database contents temporarily while you convert the LVM partition.

Assumptions:

To proceed with the migration, do the following steps:

  1. Stop each Data Collector instance:
    1. ssh dc_hostname -l root
    2. /etc/init.d/dcmd stop
    3. /etc/init.d/dcmd status
  2. Stop Data Aggregator:
    1. ssh da_hostname -l root
    2. /etc/init.d/dadaemon stop
    3. /etc/init.d/dadaemon status
  3. As dradmin, stop the database:
    1. ssh dr_hostname -l dradmin
    2. Stop the database using /opt/vertica/bin/adminTools

Important! Do the following steps as the root user, unless otherwise specified.

  1. Make a temp directory, /tmp_data, to store the data directory contents temporarily. Make sure that the directory is located on a partition that has enough space to accommodate a full copy of the /data/drdata folder. This is a temporary storage location. The data will be moved from this location later.
    1. mkdir /tmp_data
    2. Verify that /tmp_data is mounted to the temporary partition:

      mount data_partition /tmp_data

    3. Make a note of the size of the /data directory; steps 4 and 14 compare against this value:

      du -ch /data | grep -i total

    4. Determine the amount of free disk space on the destination partition:

      df -h /tmp_data

    5. Verify that there is enough free disk space on the destination partition (the partition for /tmp_data) to accommodate a full copy of the /data directory.
  2. Change the permissions of the /tmp_data folder:

    chown dradmin:verticadba /tmp_data

  3. Move the database into the new directory.

    mv /data/drdata /tmp_data

  4. Ensure the file size matches the size reported by step 1.c.:

    du -ch /tmp_data | grep -i total

  5. Make a temp directory, /tmp_catalog, to store the catalog directory. Make sure that the directory is located on a partition that has enough space to accommodate a full copy of the /catalog/drdata folder. This is a temporary storage location. The data will be moved from this location later.
    1. mkdir /tmp_catalog
    2. Verify that /tmp_catalog is mounted to the temporary partition:

      mount data_partition /tmp_catalog

    3. Make a note of the size of the /catalog directory; steps 8 and 15 compare against this value:

      du -ch /catalog | grep -i total

    4. Determine the amount of free disk space on the destination partition:

      df -h /tmp_catalog

    5. Verify that there is enough free disk space on the destination partition (the partition for /tmp_catalog) to accommodate a full copy of the /catalog directory.
  6. Change the permissions of the /tmp_catalog folder:

    chown dradmin:verticadba /tmp_catalog

  7. Move the catalog into the new directory.

    mv /catalog/drdata /tmp_catalog

  8. Ensure the file size matches the size reported by step 5.c.:

    du -ch /tmp_catalog | grep -i total

  9. Make a note of the lvm mount points by recording output of mount:

    mount

  10. Unmount /data and /catalog:

    umount /data

    umount /catalog

    Note: If you get a "device is busy" error, ensure that no shell sessions or applications are accessing these directories.

  11. Re-establish non-LVM volumes on /data and /catalog. There are three approaches; use the one that suits your environment.

  12. Remount all filesystems:

    mount -a

  13. Move the data from the temporary directories back into the /data and /catalog directories that Vertica knows:
    1. mv /tmp_data/drdata /data
    2. mv /tmp_catalog/drdata /catalog
  14. Ensure that the size of the /data directory matches the size reported by step 1.c.:

    du -ch /data | grep -i total

  15. Ensure that the size of the /catalog directory matches the size reported by step 5.c.:

    du -ch /catalog | grep -i total

  16. Restart the database:
    1. su - dradmin
    2. /opt/vertica/bin/adminTools

    Note: This can take several minutes to occur.

  17. Verify that the database is running:
    1. su - dradmin
    2. /opt/vertica/bin/adminTools
    3. Select "View Database Cluster State" and verify that the database state is "UP".
  18. Restart Data Aggregator:
    1. ssh da_hostname -l root
    2. /etc/init.d/dadaemon start
    3. /etc/init.d/dadaemon status
  19. Start each Data Collector instance:
    1. ssh dc_hostname -l root
    2. /etc/init.d/dcmd start
    3. /etc/init.d/dcmd status
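The du checks in the steps above follow one pattern: record a directory's size, move it, and verify the size again at the destination. A small helper sketch (the paths in the usage comments are illustrative):

```shell
#!/bin/sh
# Sketch: report a directory tree's apparent size in KB, mirroring the
# "du -ch ... | grep -i total" checks used in the migration steps.
dir_kb() {
    du -sk "$1" | awk '{print $1}'
}

# Illustrative usage for the /data move:
# before=$(dir_kb /data)            # step 1.c: record the size
# mv /data/drdata /tmp_data         # step 3: move the database
# after=$(dir_kb /tmp_data/drdata)
# [ "$before" = "$after" ] && echo "sizes match" || echo "size mismatch!"
```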

4.1.2 Data Repository - Cluster

Important! Back up Data Repository before proceeding. Make sure that no scheduled backups will run during this time.

Assumptions:

To proceed with the migration, do the following steps:

  1. Stop each Data Collector instance:
    1. ssh dc_hostname -l root
    2. /etc/init.d/dcmd stop
    3. /etc/init.d/dcmd status
  2. Stop Data Aggregator:
    1. ssh da_hostname -l root
    2. /etc/init.d/dadaemon stop
    3. /etc/init.d/dadaemon status

Steps to Migrate a Node In a Cluster

Important! Do the following steps as the root user, unless otherwise specified.

Do the following steps for each node in the cluster. Follow all of the steps (steps 1-15) for one node at a time.

Important! Use adminTools to verify that the database is running.

  1. Make note of the IP address for the current node:

    ifconfig

  2. As the dradmin user, access adminTools:
    1. su - dradmin
    2. /opt/vertica/bin/adminTools
  3. Stop Vertica on the host:
    1. Navigate to "Advanced Tools Menu". Press enter.
    2. Navigate to "Stop Vertica on Host". Press enter.
    3. Select the appropriate host IP address as found in step 1 in the section, "Steps to Migrate a Node In a Cluster". Press Enter.
    4. Navigate to "Main Menu". Press enter.
    5. Navigate to "Exit". Press enter.
  4. Switch back to the root user:

    exit

  5. Verify that the following command outputs "root":

    whoami

  6. Remove the files from the /data directory:

    rm -rf /data/drdata

  7. Remove the files from the /catalog directory:

    rm -rf /catalog/drdata

  8. Record the output of the following commands for debugging purposes:
    1. mount
    2. cat /etc/fstab
  9. Unmount the /data LVM directory:

    umount /data

  10. Unmount the /catalog LVM directory:

    umount /catalog

  11. Re-establish non-LVM volumes on /data and /catalog. There are three approaches; use the one that suits your environment.

  12. Remount all file systems:

    mount -a

  13. Create the drdata folder with correct permissions within /data and /catalog:
    1. mkdir -p /data/drdata
    2. mkdir -p /catalog/drdata
    3. chown -R dradmin:verticadba /data
    4. chown -R dradmin:verticadba /catalog
  14. Restart Vertica on the host:
    1. su - dradmin
    2. /opt/vertica/bin/adminTools
    3. Use the down arrow key to navigate to "Restart Vertica on host". Press enter.
  15. Continue to monitor adminTools. The status for the current node will remain as "Recovering" while the data is rebuilt. Do not continue until the database is back "UP". It can take a considerable amount of time for the database to transition to the "UP" state.
    1. Select "View Database Cluster State". Press enter.
    2. Press enter to escape to the Main Menu.

    After the database is back up, repeat steps 1-15, "Steps to Migrate a Node In a Cluster", for the next node. Continue through these steps until all Data Repository nodes are migrated off LVM.

After you complete the steps in the section "Steps to Migrate a Node In a Cluster" for all Data Repository nodes, do the following steps:

  1. Log in to any Data Repository node:

    su - dradmin

    /opt/vertica/bin/vsql -U dradmin -w drpass

  2. Run the following vsql commands to re-establish custom application settings:
    1. SELECT set_config_parameter('MaxClientSessions',1024);
    2. SELECT set_config_parameter('StandardConformingStrings','0');
  3. Start Data Aggregator:
    1. ssh da_hostname -l root
    2. /etc/init.d/dadaemon start
    3. /etc/init.d/dadaemon status
  4. Start all Data Collector instances:
    1. ssh dc_hostname -l root
    2. /etc/init.d/dcmd start
    3. /etc/init.d/dcmd status
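The wait in step 15 of "Steps to Migrate a Node In a Cluster" can also be scripted instead of watched interactively. A minimal sketch, assuming your Vertica release's adminTools supports the -t view_cluster batch option (verify against your version); the IP address is illustrative:

```shell
#!/bin/sh
# Sketch: poll the cluster state until the given node reports UP.
# Assumption: "admintools -t view_cluster" is available on this Vertica release.
cluster_state() {
    /opt/vertica/bin/admintools -t view_cluster 2>/dev/null
}

wait_for_up() {
    # $1 = node IP recorded in step 1; poll once per minute
    until cluster_state | grep "$1" | grep -q "UP"; do
        echo "node $1 still recovering; waiting..."
        sleep 60
    done
    echo "node $1 is UP"
}

# Example (run as dradmin):
# wait_for_up 10.0.0.12
```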

4.2 CA Mediation Manager Known Limitation After Upgrade

The architecture of the integration with CA Mediation Manager has been significantly enhanced. Version 2.2.6 or higher of CA Mediation Manager is required to run with CA Performance Management 2.3.4 or higher. However, that version of the integration does not support the Device Pack Generator utility.

Future versions of CA Mediation Manager will support an enhanced version of this utility. Until then, you cannot build custom device packs with this utility.

Important! CA Mediation Manager 2.2.6 is not fully backward-compatible with previous versions of CA Performance Management. To process the raw data, you must upgrade Data Collector to Release 2.4. Be sure to migrate your device packs before you upgrade CA Performance Management. See the scenario on the CA Performance Management Data Aggregator Documentation Bookshelf titled "How to Migrate Device Packs" for more information.


4.3 Segment Database Tables

If you are upgrading CA Performance Management Data Aggregator and if Data Repository is installed in a cluster environment, verify that the database tables are segmented after you upgrade the Data Repository component and before you upgrade the Data Aggregator component.

Note: For more information about verifying if the database tables are segmented, see the CA Performance Management Data Aggregator upgrade guides.
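One possible spot-check, assuming the v_catalog.projections system table and its is_segmented column (present in recent Vertica releases); confirm the exact query against the upgrade guides, which remain authoritative:

```shell
#!/bin/sh
# Sketch: list projections that are NOT segmented. An empty result suggests
# all tables are segmented. Credentials are placeholders for your environment.
DB_USER="${DB_USER:-dradmin}"
DB_PASS="${DB_PASS:-drpass}"

vsql_query() {
    /opt/vertica/bin/vsql -U "$DB_USER" -w "$DB_PASS" -t -c "$1"
}

unsegmented_projections() {
    vsql_query "select anchor_table_name, projection_name
                from v_catalog.projections where not is_segmented;"
}

# Run on a Data Repository node:
# unsegmented_projections
```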


4.4 Change the Size of the Write Optimized Storage on Data Repository

If you are managing one million or more polled items, increase the size of the Write Optimized Storage (WOS) on Data Repository from the default of 2 GB to 4 GB. Because this operation requires Data Aggregator to be shut down, we recommend that you perform the following steps before upgrading Data Aggregator.

  1. Log in to the computer where Data Aggregator is installed. To stop Data Aggregator, open a command prompt and type the following command:

    service dadaemon stop

  2. SSH to a Data Repository node.
  3. To move all data that is in Write Optimized Storage (WOS) to Read Optimized Storage (ROS), type the following command:

    /opt/vertica/bin/vsql -U database_admin_user -w database_admin_user_password -c "select do_tm_task('moveout');"

  4. To verify that no data remains in WOS, type the following command:

    /opt/vertica/bin/vsql -U database_admin_user -w database_admin_user_password -c "select sum( region_in_use_size_kb ) as wos_usage_kb from wos_container_storage;"

    If this command does not return a 0 value, wait 5 minutes and then issue the command again. If after 5 minutes the value that is returned is still greater than 0, retype the command in step 3 and then issue the command in this step again.

  5. To increase the size of WOS to 4 GB, type the following command:

    /opt/vertica/bin/vsql -U database_admin_user -w database_admin_user_password -c "alter resource pool wosdata maxMemorySize '4G';"
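The steps above can be sketched as one script. A hedged sketch, with dradmin/drpass as placeholder credentials and the function names illustrative:

```shell
#!/bin/sh
# Sketch: the WOS resize sequence (steps 3-5) as one function.
# DB_USER and DB_PASS are placeholders for your Data Repository admin credentials.
DB_USER="${DB_USER:-dradmin}"
DB_PASS="${DB_PASS:-drpass}"

run_sql() {
    # -t suppresses headers so the result can be parsed directly
    /opt/vertica/bin/vsql -U "$DB_USER" -w "$DB_PASS" -t -c "$1"
}

wos_usage_kb() {
    run_sql "select sum(region_in_use_size_kb) from wos_container_storage;" | tr -d ' \n'
}

resize_wos() {
    run_sql "select do_tm_task('moveout');"
    # Re-issue moveout every 5 minutes until WOS reports empty, per the steps above.
    while [ "$(wos_usage_kb)" != "0" ]; do
        sleep 300
        run_sql "select do_tm_task('moveout');"
    done
    run_sql "alter resource pool wosdata maxMemorySize '4G';"
}

# Run on one Data Repository node while Data Aggregator is stopped:
# resize_wos
```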


4.5 CA Spectrum Support and Upgrade Considerations

If you plan to register a CA Spectrum data source with CA Performance Management Release 2.4, we recommend upgrading to CA Spectrum Release 9.4. Earlier versions of CA Spectrum do not fully support the following new features:

Note: For information about upgrading CA Spectrum to Release 9.4, see the CA Spectrum Release 9.4 documentation.


5.0 Prerequisite for Data Export

Enabling the data export feature does not change the CPU, memory, or network I/O requirements for Data Aggregator. However, data export requires a second, separate disk partition. For a medium-sized deployment, the partition must be 50 GB, which permits the retention of one hour of data before a batch job moves the files to another file system.


6.0 Reduce CAMM DC Upgrade Run Time

If you have CAMM installed on a Data Collector, the upgrade can take longer than one hour due to a limitation within InstallAnywhere. The workaround for this issue is available in the following Knowledge Base article: https://communities.ca.com/thread/241693769.


7.0 Longevity Upgrade Time Reduced (2.4.1)

In CA Performance Management Data Aggregator 2.4, upgrading a Data Collector that was collecting data from CAMM took more than 5 hours. When you upgrade to CA Performance Management Data Aggregator 2.4.1, the time to upgrade a Data Collector is greatly reduced.


8.0 Known Issues


8.1 Data Repository Installation or Upgrade Incorrectly Detects Logical Volume Manager (LVM) and Fails

Data Repository cannot be installed if Logical Volume Manager (LVM) is being used to manage volumes that Data Repository uses.

Data Repository is backed by the Vertica database, and Vertica has never supported running its database on LVM volumes. Starting with Vertica 7 (Data Aggregator Release 2.3.4 requires Vertica 7), the Vertica installer enforces this requirement and does not allow Vertica to run on LVM.

There is a known issue with the Vertica 7.0.1-2 installer: if LVM is detected on any volume in the cluster (not just volumes that Vertica uses), the installer generates the following WARN message:

WARN (S0170): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0170

lvscan (LVM utility) indicates some active volumes.

If you encounter the WARN message during the execution of dr_install.sh and you have verified that the catalog and data directories that Vertica uses are not managed by LVM, take further steps to help ensure a successful installation or upgrade of Vertica.

Note: If the catalog and data directories that Vertica uses are managed by LVM, refer to the Upgrade Considerations section.

Important! Perform the following steps only after you have verified that the dr_install.sh script has not generated any additional WARN or ERROR messages unrelated to LVM.

Do the following steps:

  1. Search for the line in the dr_install.sh script that begins with "/opt/vertica/sbin/install_vertica". The line should look like the following line:

    /opt/vertica/sbin/install_vertica -s $DB_HOST_NAMES -u $DB_ADMIN_LINUX_USER -l $DB_ADMIN_LINUX_USER_HOME -d $DB_DATA_DIR -L ./resources/$VLICENSE -Y -r ./resources/$VERTICA_RPM_FILE $POINT_TO_POINT_SPREAD_OPTION 2>&1 | tee -a $LOG_FILE

  2. After the "-d $DB_DATA_DIR" entry in the line, add the following new entry, surrounded by a space on each side:

    --failure-threshold FAIL

    The line should now look like the following line:

    /opt/vertica/sbin/install_vertica -s $DB_HOST_NAMES -u $DB_ADMIN_LINUX_USER -l $DB_ADMIN_LINUX_USER_HOME -d $DB_DATA_DIR --failure-threshold FAIL -L ./resources/$VLICENSE -Y -r ./resources/$VERTICA_RPM_FILE $POINT_TO_POINT_SPREAD_OPTION 2>&1 | tee -a $LOG_FILE

    Adding this entry ensures that the installation fails only if one or more FAIL messages are encountered. The LVM WARN message is ignored and the installation completes successfully.

  3. To install or upgrade Vertica, re-execute the dr_install.sh script.

    When you re-execute dr_install.sh, you will see the following LVM WARN message:

    WARN (S0170): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0170

    lvscan (LVM utility) indicates some active volumes.

    However, this WARN message will not block the installation or upgrade of Vertica 7.
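The manual edit in step 2 can also be scripted. A hedged sketch using sed; the -i.bak option keeps a backup copy, so review the change before re-running dr_install.sh:

```shell
#!/bin/sh
# Sketch: insert "--failure-threshold FAIL" after "-d $DB_DATA_DIR" in
# dr_install.sh. SCRIPT is a placeholder path; a .bak backup is kept.
SCRIPT="${SCRIPT:-dr_install.sh}"

add_failure_threshold() {
    sed -i.bak 's|-d \$DB_DATA_DIR|-d $DB_DATA_DIR --failure-threshold FAIL|' "$SCRIPT"
}

# Usage, from the directory containing dr_install.sh:
# add_failure_threshold
# grep -- '--failure-threshold FAIL' dr_install.sh
```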


8.2 Data Repository Username and Data Repository Admin Username Cannot Be the Same

When you install the Data Aggregator component and you are prompted for the Data Repository credentials, do not use the same username for the Data Repository username and the Data Repository admin username. The Data Aggregator enforces that these usernames are different during a new installation.


8.3 Multiple Octets and OOB Interface Metric Family

When you create a custom certification for Interface/Port components, if the index of the MIB table has multiple octets (for example: 23.4.5.12), then you cannot use the out-of-box Interface metric family for your certification. Using the Interface metric family in this situation causes synchronization problems in CA Performance Center.

Workaround:

Use the Alternate Interface metric family or create your own custom metric family. This action causes your interface/port items to show up under device components, although this result may not be ideal.


8.4 Alu7750 Custom Certification Does Not Work as Expected

Symptom:

When interfaces are multi-indexed, CA Performance Center shows only one interface after discovery.

Example:

ifIndex=1.3.5.2.5.23...

Solution:

The multi-index is automatically hashed into integers by Data Aggregator, and then synced to CA Performance Center. From the Interface Inventory List, hashed values are shown in the ifIndex column for multi-index interfaces. For single-indexed interfaces, the original index value is shown.


8.5 Outdated References to Vertica in Documentation

All references to the Vertica Database in the Performance Management Data Aggregator 2.4.1 Documentation are outdated. Wherever Vertica 7.0.1-2 appears, the correct version number is 7.0.2-5. This change applies only to CA Performance Management Data Aggregator 2.4.1.


9.0 Contact CA Technologies

Contact CA Support

For your convenience, CA Technologies provides one site where you can access the information that you need for your Home Office, Small Business, and Enterprise CA Technologies products. At http://ca.com/support, you can access the following resources:

Providing Feedback About Product Documentation

If you have comments or questions about CA Technologies product documentation, you can send a message to techpubs@ca.com.

To provide feedback about CA Technologies product documentation, complete our short customer survey which is available on the CA Support website at http://ca.com/docs.