

Known Issues and Important Notes
This chapter describes known issues with CA AppLogic and provides important notes about the installation, configuration, and use of CA AppLogic.
This section contains the following topics:
Important Notes
Known Issues and Limitations
Important Notes
- ALD is no longer used to install or upgrade grids, or to import catalogs and applications. Its replacement is the Backbone Fabric Controller (BFC), a simple-to-use web-based GUI application used to create and manage all of your CA AppLogic grids within a single backbone. See the BFC documentation for how to download and install BFC and how to use it to manage your CA AppLogic grids. To import catalogs and applications into your grid (for example, system_ms, which ships with CA AppLogic), copy the catalog or application to your grid's impex volume and use the cat import and app import CA AppLogic commands.
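A sketch of the import workflow described above. The cat import and app import commands are named in the note itself; the archive names, the impex volume path, and the exact argument syntax are placeholders — check the command reference on your grid for the precise form.

```
# From a management host, copy the catalog or application archive onto the
# grid's impex volume (host name, path, and file names are placeholders):
scp system_ms.tar.gz root@grid.example.com:/mnt/impex/

# Then, from the grid shell, import it (argument syntax is illustrative):
cat import system_ms
app import myapp
```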
- CA AppLogic 3.x now supports the VMware ESX hypervisor, in addition to Xen. While CA AppLogic 3.x maintains all of the features and functionality for both hypervisors, there are some important usage aspects that are specific to VMware ESX:
- ESX has a more restrictive hardware compatibility list (HCL) than Xen. Verify that your servers are on the ESX hardware compatibility list before using CA AppLogic 3.x. The hardware chosen for your ESX-based grid must be verified against the HCLs for ESX, CA AppLogic-Xen (specified in the hardware compatibility section above), and CentOS 6.3.
- Some operations on heavily loaded ESX-based grids are slower than on Xen-based grids. This is because CA AppLogic uses the ESX APIs to control virtual machines on each server, and these APIs are slower than the equivalent Xen APIs. The slowdown is most noticeable in the following operations:
- Starting an application - appliances may take 1-2 minutes longer to boot on heavily loaded grids
- grid info, srv list, srv info - may take several seconds or 1-2 minutes longer on heavily loaded grids
- ESX has more memory overhead than Xen. An application that fits on 2 servers on a Xen-based grid may not fit fully on the same 2 servers on a CA AppLogic ESX-based grid. For this reason, we recommend that ESX-based servers have at least 4-8 GB of memory.
- Appliances that support ESX:
- All appliances must have vmware-tools installed (by default, all appliances that ship with CA AppLogic 3.x have vmware-tools installed). vmware-tools is needed so that the graphical console works correctly and the appliance shuts down in a timely manner. Without vmware-tools, the graphical console is much harder to use (the mouse cursor is hard to control) and the appliance takes 15 minutes to shut down.
- Windows appliances must have the correct esx_os_name setting in the virtualization options string stored in the appliance descriptor. You can update this setting for an existing Windows appliance. When installing a new Windows appliance using iso2class, a mandatory command line parameter sets the correct esx_os_name setting for the new appliance.
- To reboot an appliance, use either comp restart or restart the application. Rebooting the appliance from within the appliance itself will result in an appliance boot failure (the appliance won't be able to retrieve its configuration from CA AppLogic).
- The volfix configuration mode does not work for appliances that run on ESX; such appliances must be converted to the dhcp configuration mode. This should only affect older appliances from CA AppLogic 2.1 and earlier releases. See the Appliance Kit section of the Appliance Developers Guide for how to convert your appliance to the dhcp configuration mode.
- CA AppLogic 3.x introduces role-based access control (RBAC). RBAC provides the ability to grant permissions on, or control over, an object such as an application template, application instance, catalog, or grid. By default, a newly created user has limited access to the grid's objects; for example, the user does not have login permissions to the grid. Configure appropriate access rights (see User and Group Administration, Overview) for your users to access the grids.
- As of 3.7, Solaris and OpenSolaris are not supported, and creation of Microsoft Windows 2003 Server appliances is not supported. However, these appliances still operate correctly if migrated from an older grid.
Note: All the Solaris-based appliances have been removed from the catalog starting from CA AppLogic 3.7.
- CA AppLogic is OS agnostic and designed to be used with different operating systems. As part of this design, all volume operations, such as create/format, copy, and manage, are executed within Filer applications; they are no longer executed by the grid controller. These filer applications consume grid resources like any other application, so there must be enough available resources on your grid to execute volume operations. The filer applications are not used for raw volumes or block-level volume copies.
- As of 3.7 GA for Xen-based grids, the grid controller is assigned one full CPU core for its own exclusive use. CA AppLogic also reserves one CPU on every server (exclusive for dom0) with the exception of the primary server that is running the grid controller. On the primary server two CPUs are reserved for the grid controller and for dom0. CA AppLogic will not assign any appliances to run on the same CPU core as used by dom0 or the grid controller.
For ESX-based grids, the grid controller still uses 10% of a core and dom0 is assigned one non-exclusive CPU core, as in all previous releases.
- Because all volume operations are now executed using filer applications, volume operations are slower than in previous releases: the filer applications must be started and stopped as part of each operation. Typically, this adds about 20 seconds of overhead for Linux-based volume operations and several minutes for Windows-based volume operations.
- Network bandwidth resource usage is enforced on all appliances. An appliance cannot use more than its configured bandwidth across all of its terminals combined. Verify that the configured bandwidth for your appliances and applications matches your bandwidth usage needs. The maximum bandwidth per server is 2 Gbit/s, or 20 Gbit/s on a 10GE backbone network.
- For appliances that pass network traffic through, such as gateways, load balancers, and port switches, you must account for traffic passing both in and out of the appliance, which cuts the usable bandwidth in half. For example, a load balancer assigned 100M of bandwidth is actually limited to 50M: 50M for traffic entering the appliance and 50M for traffic leaving it.
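The halving effect works out as simple arithmetic; the 100M figure is the example from the note above.

```shell
assigned_mbps=100   # bandwidth assigned to the pass-through appliance (example value)

# Traffic crosses the appliance twice -- once in, once out -- so the usable
# pass-through capacity is half the assigned bandwidth.
effective_mbps=$((assigned_mbps / 2))

echo "assigned=${assigned_mbps}M effective=${effective_mbps}M"
```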
- Before accessing the GUI on a newly installed or upgraded grid, you should clear the browser's cache. If the browser's cache is not cleared, the GUI may not behave properly.
- The grid shell can be accessed either through a web browser or using an ssh client. For increased security, password-based ssh logins are not supported except during grid installation.
Important! We strongly recommend that you use the web shell provided with the CA AppLogic GUI.
- When accessing the grid over ssh, the login user name is always root, regardless of the CA AppLogic user name. For the purpose of ssh logins, users and their roles are uniquely identified by their public ssh keys.
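For example, every ssh login uses the root user name; the grid controller address and key path below are placeholders, and the key you offer determines which CA AppLogic user and role you are mapped to.

```
ssh -i ~/.ssh/my_applogic_key root@grid.example.com
```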
- JavaScript and pop-ups must be enabled in the web browser to use the web-based graphical user interface (dashboard, editor, documentation).
- Users are responsible for allocating, assigning, and using externally visible IP addresses for applications; CA AppLogic takes care of all internal network assignments.
- While the Backbone Fabric Controller sets up all grid servers and controllers with carefully pre-configured firewalls and disables unnecessary network services, users and maintainers are encouraged to verify the security settings of their systems.
- Network performance between servers on the private network used for volume and inter-appliance communication is approximately 900 Mbps. TCP throughput measured between appliances residing on different servers is 720-900 Mbps. For Windows appliances, TCP throughput is about 700 Mbps and UDP throughput is about 500-700 Mbps.
- Resource limits on appliance hardware resources are enforced differently for different resource types (CPU, memory, bandwidth): CPU is "no less than", memory is "exactly that much" (including VM overhead), and bandwidth is "exactly that much". CPU enforcement can be changed to "exactly that much" using the new --cap_cpu option when starting the application.
- When starting an application with a specified minimum amount of CPU, the application is not guaranteed to get exactly that amount. For example, an application started with cpu=2 may receive 1.97 CPU, as observed by adding up the CPU assigned to all components of the application. This is due to rounding errors that can occur when assigning CPU to each individual component.
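The rounding effect can be sketched with integer arithmetic. The three-component split and the hundredth-of-a-core granularity are assumptions for illustration, not CA AppLogic internals.

```shell
requested_hundredths=200   # cpu=2.00, expressed in hundredths of a core
components=3               # hypothetical number of components in the application

# If each component's share is rounded down to a whole hundredth of a core,
# the shares no longer add up to the requested total.
per_component=$((requested_hundredths / components))   # 66, i.e. 0.66 CPU each
total=$((per_component * components))                  # 198, i.e. 1.98 CPU

echo "requested=${requested_hundredths} assigned_total=${total}"
```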
- When an application fails to start, not all messages related to the failure may be shown in the shell. Inspect the grid log for additional information using the list log n=20 command.
- Grids in which linear scalability of performance is important should be built using servers that are as uniform as possible in CPU type/speed, memory size and disk capacity. CA AppLogic will work correctly in grids assembled from servers with different amounts of hardware resources; however, on such grids you may experience sub-linear performance.
- If the grid controller VM fails, CA AppLogic restarts it automatically, but there is no user visibility into the grid while the controller is restarting. Typically the grid controller restarts on its own within 1-2 minutes. If the grid controller is unavailable for more than 5 minutes, contact CA Support.
- Creation of an NTFS03 volume always results in an NTFS08 volume. NTFS08 volumes may be used with Windows 2003 Server.
- The net_discover command for grid and server is not supported on ESX-based grids/servers.
- When using a SAN with your CA AppLogic grids, ensure that there is at least 500GB of free space for every grid that uses the configured NFS share. For example, if the NFS share is to be used for five different grids, the share should have 2.5TB of free disk space.
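The sizing rule above is simple multiplication; the five-grid count is the example from the note.

```shell
grids=5                              # number of grids sharing the NFS share (example)
per_grid_gb=500                      # minimum free space required per grid, in GB
required_gb=$((grids * per_grid_gb))

echo "${required_gb} GB required"    # 2500 GB, i.e. 2.5 TB
```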
- When using a SAN with your CA AppLogic grids, if the SAN or NFS share goes offline for any period of time, some of the volumes that were in use might get corrupted. If this corruption prevents the grid controller from running or causes applications to fail to start (or any other grid or application instability), contact CA Support immediately.
- To use CA AppLogic appliances based on the latest OS distributions (such as Fedora Core, Ubuntu, Debian, RedHat, and CentOS), use the latest APK versions distributed with CA AppLogic 3.7 or later. If you do not use the latest APK version from the 3.7 release, you must configure the field engineering code of 128 on the boundary of the appliances. This field engineering code instructs CA AppLogic to use a newer device name style for the appliance volumes used by these newer distributions. If the field engineering code of 128 is not specified, appliances based on these newer distributions fail to start unless the latest APK version is used. We recommend updating all appliances with the latest APK release.
- Windows 2003 Server templates are no longer distributed with CA AppLogic. The Windows 2003 Server OS is still supported, but the templates are no longer maintained. We recommend using Windows Server 2008 or 2012 instead of Windows 2003 Server.
- Solaris and OpenSolaris appliances are no longer supported (and will not be supported in subsequent releases).
- As of CA AppLogic 3.7, the WEB5 appliance remains in the system catalog for backward compatibility. However, this appliance may be removed from the system catalog in a subsequent release.
- As of CA AppLogic 3.7, the LampCluster template application is no longer distributed with the release and will not be maintained.
- Language pack hotfixes are now integrated into CA AppLogic. As such there is no need to install a hotfix to get support for a specific language (all languages are installed by default).
- If either your primary or replica BFC database is lost or corrupted, you may be able to recover it from the automatic backups that BFC has run since version 3.1. These backups live in a subdirectory of the primary database, so they are not a substitute for configuring a replica. (The backups are also written to a subdirectory of the replica, if you have one configured.) To restore from the most recent backup:
- Wait two minutes after all active grid operations complete to ensure the backup is up-to-date.
- Log in to the BFC system as root.
- Run db_restore from the bin directory of the BFC installation (by default, /opt/bfc/bin/db_restore). The db_restore utility stops the BFC (if it is still running), restores the database from the most recent backup, and then restarts the BFC.
- After the restore you might find one or more of your grids in "Running, but needs attention" state. If you do, simply clear the failure on those grids before proceeding.
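On the BFC host, the steps above collapse to a single command, since db_restore itself handles stopping, restoring, and restarting the BFC (default path from the note; run as root after waiting out active grid operations).

```
# Stops the BFC if it is running, restores the most recent backup,
# and restarts the BFC.
/opt/bfc/bin/db_restore
```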
Copyright © 2013 CA Technologies.
All rights reserved.