

Frequently Asked Questions - Customer Support

Q: "I am trying to install a 3.1 grid on HP hardware and it says it can't find any disks"

A: You must download and apply HF6200 to get the drivers for the HP Smart Array G6 controller.

Q: "I have hardware that is listed as compatible with the AppLogic HCL but it is shown as not compatible with the Xen HCL what do I do?"

A: The AppLogic HCL is based on the Linux kernel 3.0 HCL, not Xen's HCL. You may proceed to test your hardware regardless of whether it is listed as supported on the Xen HCL.

Q: "My installation fails with "Cannot reach the metering gateway at grm.3tera.net""

A: Verify that your network allows outbound port 22 (SSH) access, as this is the network protocol used to transmit data to our metering server. If you cannot get outbound port 22 access and you are forced through a corporate proxy, then please download and install the metering gateway application from download.3tera.net to transmit your data over HTTPS as an alternative.

Q:"How can I connect to a given appliance from the dom0 computer if I have problems accessing it from AppLogic ?"

A: On the dom0 server that the appliance is running on (see server list --map to find out), run xm list: that will produce a list of the domains running on that dom0 server. Find the ID of the right one and run xm console <id>. That will give you access to the console for that appliance.
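The lookup above can be scripted. A minimal sketch over a canned sample of `xm list` output (the appliance name `myapp_main` and the numbers are illustrative; on a real dom0 you would pipe `xm list` directly):

```shell
# Sample output of `xm list` on the dom0 node (illustrative values)
xm_list='Name                ID   Mem VCPUs  State  Time(s)
Domain-0             0  2048     4  r----- 1234.5
myapp_main           3   512     1  -b----   56.7'

# Pick the numeric domain ID for the appliance we want
dom_id=$(printf '%s\n' "$xm_list" | awk '$1 == "myapp_main" {print $2}')
echo "run: xm console $dom_id"
```

With a live grid, `xm list | awk '$1 == "<name>" {print $2}'` gives the argument for `xm console`.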

Q: What is the right process to change the Windows appliance name?

A: There are two approaches

  1. Graphically log in to the Windows box, enter Cygwin, add APK_HOSTNAME_UPDATE=yes to /etc/sysconfig/applogic_init (or remove /etc/sysconfig/applogic_init), and stop the Windows appliance. Afterward, change the appliance instance name and start the appliance again. Please refer to the "Computer Name" section in the document below for more details.

    http://doc.3tera.com/AppLogic31/en/Developer_Guide/index.htm?toc.htm?1537823.html

    The instance name can be found in the "attribute" tab of the appliance; please refer to the attached screenshot for more details.

  2. Log in to the controller and run '3t util wincfg name=<app_name:comp_name> computer_name=<new name>'. This does not require restarting the Windows appliance. Please refer to the document below for more details.

    http://doc.3tera.com/AppLogic31/en/Cli_Ref/index.htm?toc.htm?RefWinCfg.html

    Both approaches eventually invoke the following command to update the Windows computer name. The name is changed permanently and will not roll back after a restart.

    wmic computersystem where name=<old host name> rename name=<new host name>
    

The two approaches also have some differences.

There is a known issue with changing the Windows name in AppLogic 2.x and 3.x releases: after renaming the Windows hostname, the application may enter maintenance mode and fail on the next restart. Starting it again brings the application back online.

Q: What is the difference between a managed and an unmanaged appliance?

A: In brief, a managed appliance is one that can be configured and managed by the controller.

From the user's perspective, a managed appliance has the following capabilities that an unmanaged appliance does not:

  1. All properties defined in the AppLogic editor take effect after the appliance is started up, e.g. hostname, IP address, etc.
  2. Full HA support, because the controller is aware of the appliance's runtime state and restarts it if it crashes.
  3. Automatic mounting of user volumes (volumes other than the boot volume).

A managed appliance has these capabilities because it carries the APK (appliance kit). The APK is a set of scripts and binaries injected into the appliance to dynamically configure the Linux/Windows box at appliance startup and to communicate with the controller after the appliance has completely started up.

All appliances shipped by AppLogic are managed appliances with the APK installed.

For Linux/Windows boxes imported using iso2class and hvm2pv, the APK is installed at different stages.

For Linux, the APK is installed automatically by hvm2pv. In other words, if you choose not to run hvm2pv in the middle of an import issued by iso2class, the Linux appliance becomes an unmanaged appliance.

For Windows, the APK is installed automatically by server_windows-xxx.msi for appliances and vds_windows-xxx.msi for VDS. It is not necessary to run hvm2pv for Windows appliances.

The APK file list and behavior are not the same on different platforms. The APK install packages can be found at /usr/local/applogic/download/apk*.tar.gz on the controller. If you would like to know which files are installed by the APK, you can open the packages to check.
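Inspecting a package is a plain `tar -tzf`. A sketch (we build a stand-in tarball so the command can be tried anywhere; on a real controller, point `tar -tzf` at /usr/local/applogic/download/apk*.tar.gz — the file names below are invented for the demo):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/applogic/bin"
touch "$tmp/applogic/bin/apk_init"            # stand-in for a real APK file
tar -czf "$tmp/apk_demo.tar.gz" -C "$tmp" applogic

# The actual inspection step: list every file the package would install
listing=$(tar -tzf "$tmp/apk_demo.tar.gz")
echo "$listing"
rm -rf "$tmp"
```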

Q: What is the best practice to clean unused volumes and streams?

A: "3t vol clean --status --unused" displays all unused volumes that don't belong to any known entity (application, class, etc.) as well as unused volume streams. "3t vol clean --unused" destroys all unused volumes and streams. "3t help vol clean" gives a detailed explanation of how to use vol clean.

As a safety measure, we suggest performing the following checks before cleaning unused volumes and streams:

  1. Run "3t vol list --all" to verify that all volumes are in a good and synced state. If there is a degraded volume, repair it first.
  2. Verify that no volume repairs are currently running using "3t vol repair --status", and suspend auto volume repair using "3t vol repair --suspend" until the volume clean is completed. "3t vol repair --resume" resumes auto volume repair.

There is a way to cross-check which application a volume or stream belongs to: run "vol list --all" to list all volumes in the grid, or "vol list server=srvX" to list all volumes that have at least one mirror on the specified server srvX. Then run "vol info <volume name> --batch" for each volume to display the associated streams. For instance, in the sample below, stream v-dc5f7240-e684-4a81-8dde-3b1d6fd9956d (displayed on the "mirror" line) is associated with volume LINUX5.boot of the application named test_vds. If a stream displayed by "vol clean --status --unused" does not appear in the output of any "vol info <volume name> --batch", the stream is unused and safe to clean.

3t vol info test_vds:VDS_CENTOS55.boot --batch
volume boot
   {
   name            = "test_vds:LINUX5.boot"
   link            = ""
   comment         = ""
   uuid            = "55375cf3-c28e-4b12-8c71-d8974d74a5ab"
   size            = 2147483648
   state           = ok
   filesystem      = ext3
   partitions
      {
      partition par1 : size=2147450880, fs=ext3
      }
   unused_space    = 0
   mount_state     = available
   mount_path      = none
   n_users         = 0
   time_created    = 1335102818
   time_written    = 1335102883
   time_accessed   = 1337120274
   n_mirrors       = 1
   mirror srv1.v-dc5f7240-e684-4a81-8dde-3b1d6fd9956d : server = srv1, state = ok
   }
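The cross-check itself can be automated: collect every stream named on a "mirror" line of the saved "vol info --batch" outputs, then see which reported-unused streams are absent. A sketch over the sample above (the second unused stream ID is invented for illustration):

```shell
# Saved "vol info --batch" output (the mirror line from the sample above)
vol_info='   mirror srv1.v-dc5f7240-e684-4a81-8dde-3b1d6fd9956d : server = srv1, state = ok'
# Streams reported by "vol clean --status --unused" (second one is illustrative)
unused='v-dc5f7240-e684-4a81-8dde-3b1d6fd9956d
v-00000000-0000-0000-0000-000000000000'

# Streams referenced by some volume
known=$(printf '%s\n' "$vol_info" | sed -n 's/.*mirror [^.]*\.\(v-[0-9a-f-]*\).*/\1/p')

for s in $unused; do
  case " $known " in
    *" $s "*) echo "$s: still referenced, keep" ;;
    *)        echo "$s: unused, safe to clean" ;;
  esac
done
```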

Q: I need to boot my appliance from DVD in my Xen grid. How can I do it?

A: To do this, follow this procedure:

  1. Edit the appliance: highlight it and choose Modify Boundary.
  2. In Modify Boundary choose Volumes, then choose Add to add a new one: call it DVD, choose type placeholder, and check the radio button to make it a boot volume.
  3. Exit Modify Boundary and choose User Volumes, again with the appliance highlighted.
  4. In User Volumes you will see the DVD volume. In the App_volume column choose iso_volume1, which is the DVD for the appliance you want to boot from.
  5. Save the application. Do not close the editor. Start the appliance from the Applications menu.
  6. While it is starting, choose the Graphical console in the Editor.

Q: My Linux appliance does not boot. On the graphical console I can see GRUB __. What may be happening and how can I solve it?

A: It looks like GRUB is not correctly installed in the MBR of the appliance you want to boot, as it does not get past stage 1 of the GRUB boot process. In this case the easiest fix is to boot the appliance from DVD, or mount its boot volume with vol manage, then reinstall GRUB in the MBR.

The procedure for booting from DVD has been described in other entries in this FAQ.

As for installing on a managed volume, the procedure is as follows:

  1. Do a vol manage of the boot volume whose MBR needs to be rewritten (e.g. vol manage myapp:main.boot). After doing this, the volume can be mounted under /mnt/vol/par1. Let's imagine the volume you want to manage is /dev/sdc1 (run fdisk -l if in doubt):
    mount /dev/sdc1 /mnt/vol/par1
    

Your /proc and /dev still will not reflect those of the volume where you want to reinstall GRUB to the MBR, so you will need to mount them:

  2. Run
    mount -t proc none /mnt/vol/par1/proc
    mount -o bind /dev /mnt/vol/par1/dev
    chroot /mnt/vol/par1/    /bin/bash
    
    
  3. Now install GRUB. Run
        grub
     At the grub prompt, run
        grub> find /boot/grub/stage1
     Let's imagine we get
        (hd0,0)
        (hd1,0)
     and GRUB is installed on the second disk, so next
        grub> root (hd1,0)
     and finally
        grub> setup (hd1)
    

With this, GRUB is reinstalled.

Q: I believe I have stuck volumes in one of my nodes, as the application does not start and I see errors in /var/log/messages in the controller. How can I find out ?

A: Apart from the controller logs, the first place to look is /var/log/messages on each node. AppLogic assigns data streams to hoop devices and creates mirrored md devices from local hoop devices and nbd devices shared from another node for redundancy. Hence each md device must have at least one nbd or hoop device attached, each nbd device must have the address and port of the remote server it pulls data from, and each hoop device must have a data stream attached, be shared with the remote servers, and have a port assigned. If any of these goes amiss, there is likely to be a problem.

To see if this is the case you can run the following command in the different node(s):

3tsrv bd list --all

This is going to give you the list of md, hoop and nbd devices on that node. The list should look like the following:

--- md Devices ---
Name       Attached Devices
----------------------------------
md1        hoop0, nbd2
md2        hoop1, nbd4
md3        hoop2, nbd6
--- hoop Devices ---
Name       Volume                                        Shared    Port
------------------------------------------------------------------------
hoop0      v-ctl-boot                                    Y         63001
hoop1      v-ctl-meta                                    Y         63002
hoop2      v-ctl-impex                                   Y         63003
--- nbd Devices ---
Name       Remote IP            Remote Port
-------------------------------------------
nbd2       192.168.2.2          63002
nbd4       192.168.2.2          63001
nbd6       192.168.2.2          63003
If, for instance, you see entries  like
hoop1      v-ctl-meta                                    N/A  N/A
nbd6      192.168.2.2          N/A
md3  hoop6, nbd7 (where hoop6 and/or nbd7 do not exist)

or any other anomaly, there may be a problem that you should report to technical support.
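A quick way to surface such anomalies is to scan a saved copy of the listing for N/A fields. A sketch over the anomalous sample rows above (the exact column spacing is illustrative):

```shell
# Saved "3tsrv bd list --all" lines (the anomalous entries from above)
bd_list='hoop0      v-ctl-boot      Y      63001
hoop1      v-ctl-meta      N/A    N/A
nbd2       192.168.2.2     63002
nbd6       192.168.2.2     N/A'

# Any device row containing N/A is suspect
anomalies=$(printf '%s\n' "$bd_list" | awk '/N\/A/ {print $1 ": possible stuck device"}')
echo "$anomalies"
```

On a live node, `3tsrv bd list --all | grep 'N/A'` gives the same shortlist to include in a support report.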

Q: How to re-initialize the DB replication?

A: Follow these steps:

  1. SSH into your BFC and log in as bfcadmin (or switch to user bfcadmin using "su - bfcadmin"), then stop any replica already running with
    /opt/bfc/bin/stop_replication
    
  2. Tar the replica folder, then remove all files under it. The replica folder path can be found in the "BFC database" tab of the "select backbone identity" link in the left panel.
  3. Stop and start the BFC service (you will need to log back in as the root user)
    service bfc stop
    service bfc start
    
  4. Log into the BFC GUI and set your replica path again.

You should see all the files created again into the destination folder.

NOTE: This may not necessarily be your case, but check this before proceeding. In my lab I reproduced your issue where the BFC replica fails because of a problem with the server hostname. The BFC uses the call "hostname -s" when you set a replica. If the output of "hostname -s" fails, the replica will not work.

You may check if running that call at the prompt you are able to output the hostname of the BFC. If not you need to make necessary correction into the /etc/hosts file (let me know if that is the case and I'll assist you with that).
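A sketch of that check, runnable on the BFC (the wording of the messages is ours, not BFC output):

```shell
# The BFC derives the replica path from "hostname -s"; verify it resolves
if short=$(hostname -s 2>/dev/null) && [ -n "$short" ]; then
  status="ok ($short)"
else
  status="failed: fix /etc/hosts before setting the replica"
fi
echo "hostname -s check: $status"
```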

Q: How to verify if hf6099 has been installed successfully on AppLogic 3.x

A: The verification includes the following steps:

Run "3t grid info" on the controller; hf6099 should be displayed in the "AppLogic Version" line if it was installed successfully. If your grid is 3.x, the grid version and applied hotfixes are also displayed in the BFC GUI.

On the physical node, open /boot/grub/grub.conf; it should have an entry with "iommu=off" if hf6099 has been applied successfully.

Also on the physical node, you may see /boot/grub/hf6099-save-grub.conf, which is the original grub.conf saved by AppLogic before applying hf6099. It is usually created, but in rare cases may not be.

Please keep in mind that hf6099 requires a reboot. Please refer to the following page for details.

http://doc.3tera.com/AppLogic30/en/Release_Notes/index.htm?toc.htm?Hotfixhf6099.html

Q: What is the best practice to swing the controller to another physical node?

A: If the controller is accessible, run the 3t shell commands below to make sure the controller volumes are in a good state; otherwise, execute "vol repair <volume name> --force" to repair the controller volumes.

    vol info _SYSTEM:boot
    vol info _SYSTEM:meta
    vol info _SYSTEM:impex

Afterward, issue the command below to promote a secondary server to the new primary server. This command automatically restarts the controller on the new primary server.

    server set srv2 role=primary

In the sample above, srv2 is the new primary server on which you would like the controller to start. Make sure the target node has the secondary role before swinging the controller.
 
If the controller is stuck and you would like to recover it, please follow the procedure below:

  1. SSH into any physical node and run "3tsrv sd get"; it displays each server's HA role and where the good volume streams of the controller are.
    [root@srv1 ~]# 3tsrv sd get
    cluster
       {
       signature = "S20120503194810614887013849227"
       }
    volume boot                 --> streams of the controller boot volume
       {
       mirrors
          {
          mirror v-ctl-boot: server = srv1, synced = 0                             --> synced = 0: the boot volume mirror on srv1 is in a bad state
          mirror v-5eb2f325-7010-44f9-b72e-93964a6d17ef: server = srv2, synced = 1 --> synced = 1: the boot volume mirror on srv2 is good, can be used for controller recovery
          }
       }
    volume meta                 --> mirrors of the controller meta volume
       {
       mirrors
          {
          mirror v-ctl-meta: server = srv1, synced = 1
          mirror v-d752ff49-bbe0-43b9-a674-1d1547ec46ab: server = srv2, synced = 1
          }
       }
    volume impex                --> mirrors of the controller impex volume
       {
       mirrors
          {
          mirror v-ctl-impex: server = srv1, synced = 1
          mirror v-b024c865-bca4-4c78-9e76-1325187a5ceb: server = srv2, synced = 1
          }
       }
    server srv1: ha_role = primary      --> srv1 is the current primary
    server srv2: ha_role = secondary    --> srv2 is one of the secondary servers
    server srv3: ha_role = secondary    --> srv3 is the other secondary server
  2. Make sure each controller volume (boot, meta and impex) has at least one good mirror (synced=1); the node hosting the good mirror must remain functional during the following recovery.
  3. Log in to all primary/secondary servers and execute "service heartbeat stop" to stop the heartbeat service. This may take several minutes. When executed on the primary server, it shuts down the controller. If the controller is still running after the heartbeat service is stopped, execute "xm destroy controller" on the primary to shut down the controller.
  4. Ping the controller's private and public IPs to make sure the controller is really down and its IPs are not in use. If the controller's public or private IP is still pingable, you need to figure out which node is hosting the IP. In the following example, "192.168.9.{1..10}" iterates over all 10 nodes of the grid (grid id: 9).
    for i in 192.168.9.{1..10}; do echo "=================$i================="; ssh root@$i "ip addr show | grep <controller private or public ip>"; done
    
  5. Then execute "ip addr del <controller private or public ip, e.g. 192.168.1.254/32> dev <xenbr0/xenbr1>" to remove the orphan controller IP.
  6. SSH into the node on which you would like to restart the controller, for instance srv2 in this sample. Keep in mind that it must have the secondary server role.
  7. Run "3tsrv set role=primary --recover". This command promotes the current node from the secondary server role to the primary server role and restarts the controller on it. It usually takes 10-20 minutes to start the controller. During this period, it repairs any degraded controller volumes, which needs additional time.
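The synced flags in the "3tsrv sd get" output can be filtered mechanically when choosing a recovery node. A sketch over the sample mirror lines above (the awk field positions assume the exact layout shown):

```shell
# Mirror lines from a saved "3tsrv sd get" (sample values from above)
sd_get='mirror v-ctl-boot: server = srv1, synced = 0
mirror v-5eb2f325-7010-44f9-b72e-93964a6d17ef: server = srv2, synced = 1'

# Keep only mirrors with synced = 1; strip punctuation for a clean report
good=$(printf '%s\n' "$sd_get" | awk '/synced = 1/ {gsub(/[:,]/,""); print $2 " on " $5}')
echo "usable mirrors: $good"
```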

Q: What is the best practice to remove a physical node from a grid?

A: Follow these steps:

  1. Disable the physical node either from the BFC GUI or via the 3t shell.
  2. Restart the applications running on the disabled node. The applications will be allocated to available (enabled) nodes. "3t srv list --map" displays which appliance is running on which node.
  3. Manually migrate the volume streams using the "3t vol migrate" command. "3t vol migrate --all" migrates all volume streams from the disabled node to other nodes in the grid. AppLogic detects which nodes in the grid have sufficient disk space and performs the volume stream relocation.

    If you would like to know whether the remaining nodes in the grid have enough disk space, run "3t vol list server=<disabled server>", which displays all volume streams stored on the disabled server along with their sizes; afterward, run "3t srv list --verbose" to display every node's server resources, including free disk space. In the 3t shell session, "help vol migrate" displays detailed help for this command.

  4. Run "3t srv list --map" to make sure no application is running on the disabled node. In addition, run "3t vol list server=<disabled server>" to make sure no stream is stored on the disabled node. Alternatively, run "3t vol list --all" to verify that every volume has at least one good stream hosted by available nodes in the grid.
  5. Reduce the minimum/target server number by 1 in the BFC GUI. If the number of nodes in the grid is still greater than the minimum server number even after removing the node, it is fine not to change the minimum server number.
  6. Delete or quarantine the physical node from the grid.

Tips for verifying physical node network connectivity

Tip 1: How to review the network layout

SSH to the physical node and open /etc/applogic.d/network.conf. The physical node detects the network layout it is connected to and writes the result to /etc/applogic.d/network.conf.

# The following 2 lines are the backbone and external switches the node is connected to; each has a switch id, model and enabled protocol. Each switch is displayed on its own line.
switch a: id="ff:00:00:00:00:01", id_method="none", model="unknown"
switch b: id="54:75:d0:19:59:80:0000", id_method="stp", model="54:75:d0"
# Switch roles: backbone switch or external switch. The lan names (l1 and l2) are used to find out which NIC of the node is connected to which switch.
lan l1: switches="a", role="backbone"
lan l2: switches="b", role="external"
# From the information below, we can see the node has 2 NICs: eth0 is connected to the backbone switch over lan l1, and its MAC address is "18:03:73:f5:98:df"; eth1 is connected to the external switch over lan l2, and its MAC address is "18:03:73:f5:98:e1".
interface eth0: lan=l1, mac="18:03:73:f5:98:df", switch=a
interface eth1: lan=l2, mac="18:03:73:f5:98:e1", switch=b
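The NIC-to-switch mapping can be pulled out of the file mechanically. A sketch over the sample interface lines above (field handling assumes the exact format shown):

```shell
# Interface lines from /etc/applogic.d/network.conf (sample from above)
conf='interface eth0: lan=l1, mac="18:03:73:f5:98:df", switch=a
interface eth1: lan=l2, mac="18:03:73:f5:98:e1", switch=b'

# Print NIC -> switch, dropping punctuation and the switch= prefix
map=$(printf '%s\n' "$conf" | awk '{gsub(/[:,]/,""); sub(/switch=/,"",$NF); print $2 " -> switch " $NF}')
echo "$map"
```

On a real node, `grep '^interface' /etc/applogic.d/network.conf` feeds the same awk filter.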

Note:

At minimum, two switches and NICs (one for the backbone, the other for the external network) should be displayed in the file; if HA is configured, you may see 4 switches and NICs.

If a NIC interface name is displayed in the format "__tmpxxxxx" instead of ethX, and it is connected to the backbone or external switch as follows, it will break connectivity, because bondX can only be created upon ethX.

interface __tmp12345678: lan=l2, mac="18:03:73:f5:98:e1", switch=b

This problem is caused by a node OS (CentOS) bug in which the OS has trouble recognizing and assigning a normal name (ethX) to the NIC within the limited timeframe.

The solution in this case is to manually configure /etc/sysconfig/network-scripts/ifcfg-ethX, defining the device name, MAC address, IP address, etc. in it.

Tip 2: How to check cabling status

Run "ip link" and it outputs the network link status. If any ethX link status is "NO-CARRIER", that interface has a cabling issue. In the following sample, eth1 is cabled properly but eth2 has a cabling issue.

11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond1 qlen 1000
    link/ether 00:21:9b:a5:3d:59 brd ff:ff:ff:ff:ff:ff
12: eth2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:21:9b:a5:3d:5b brd ff:ff:ff:ff:ff:ff
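The check above reduces to a one-line filter. A sketch over the sample `ip link` lines (on a live node, pipe `ip link` directly into the awk filter):

```shell
# Saved "ip link" lines (sample from above)
ip_link='11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500
12: eth2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500'

# Report any interface whose flags include NO-CARRIER
bad=$(printf '%s\n' "$ip_link" | awk -F': ' '/NO-CARRIER/ {print $2 ": check cabling"}')
echo "$bad"
```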
Tip 3: How to check bond device status

The AppLogic bonding device is designed for the HA scenario. A bond device is a virtual network interface consisting of one or more physical network interfaces (ethX) in active-passive mode. If the active ethX goes down, the bond swings to the passive interface.

"3tbond list" displays all bond devices on the physical node, and "3tbond info bondX" displays more details of a specific bond device.

# 3tbond list
Name      State       NICs
-----------------------------------------------------
bond0     ok          eth0
bond1     ok          eth1

#3tbond info bond0
Name : bond0
State: ok
Mode : active-passive
NICs :
   Name       Connected   Active
   eth0       up           yes

Note:

  1. bond0 is always connected to the backbone network; bond1 is connected to the external network.
  2. bondX devices are dynamically created when the node boots up.
  3. In the HA scenario, bondX devices are always recreated, and each bondX should consist of two ethX (one active, the other passive) if both ethX link states are good. If only one ethX works properly, bond0 can still come up.
  4. In the non-HA scenario, the bond creation rule differs from version to version. For instance, in 3.1, bondX is only created when the NIC speed is 1 Gb/s, but other versions do not have such a restriction.
Tip 4: How to check Xen bridge status

xenbrX is a Xen bridge created for directing traffic between the host OS and guest OSes, and between the various guest OSes.

"brctl show" is usually used to check their status.

In the following sample, xenbr0 is associated with bond0, and xenbr1 is associated with bond1.

# brctl show
bridge name      bridge id                STP enabled      interfaces
xenbr0           8000.180373f598df        no               bond0
xenbr1           8000.180373f598e1        no               bond1

Note:

  1. xenbr0 is always connected to the backbone network; xenbr1 is connected to the external network.
  2. xenbrX can be created upon either bondX or ethX, depending on the AppLogic version and the NIC.
  3. xenbr0 has the same MAC address as the ethX connected to the backbone, and xenbr1 the same MAC address as the ethX connected to the external network.
  4. If xenbrX devices are created successfully, ethX and bondX do not have IPs; instead, xenbrX owns the IP address.

Q: What is the best practice to download AppLogic manually?

A: Follow these steps:

  1. Download the AppLogic files from download.3tera.net to the BFC
              rsync -rptgoL --progress -e "ssh -q -o StrictHostKeyChecking=no -i  download_key_file_full_path" applogic@download.3tera.net:applogic_version  local_location_of_download
    

    For example, to download AppLogic 3.1.14 to /opt/bfc/downloads/ on the BFC, with the download account private key file at /opt/bfc/fcdownloadkeyfile.pri, the command should be issued with the following parameters:

              rsync -rptgoL --progress -e "ssh -q -o StrictHostKeyChecking=no -i /opt/bfc/fcdownloadkeyfile.pri" applogic@download.3tera.net:3.1.14 /opt/bfc/downloads/
    

Note: If you have ever imported AppLogic from a specific location in the past, please use the same location as the current download location, because the BFC caches it and automatically looks in that folder when importing later.

  2. Change the owner of the download directory and its contents to account bfcadmin and group bfc
              chown -R bfcadmin:bfc local_location_of_download
    

    In this example, the command looks like this:

              chown -R bfcadmin:bfc /opt/bfc/downloads/3.1.14
    
  3. Set mode 755 on the download directory
              chmod 755 local_location_of_download
    

    In this example, the command looks like this:

              chmod 755 /opt/bfc/downloads/3.1.14
    

If the BFC displays error messages, it is usually because the download directory owner or permissions are not set correctly; please verify and repeat steps 2 and 3.
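Steps 2 and 3 can be verified with `stat`. A sketch (demonstrated on a temp directory so it can run anywhere; on the BFC, substitute /opt/bfc/downloads/<version>; `stat -c` is GNU coreutils, with a BSD fallback):

```shell
# Stand-in for the real download directory
dir=$(mktemp -d)
chmod 755 "$dir"

# Read back the octal permission bits
mode=$(stat -c %a "$dir" 2>/dev/null || stat -f %Lp "$dir")
echo "directory mode: $mode"
rm -rf "$dir"
```

On the BFC, `stat -c '%U:%G %a' /opt/bfc/downloads/<version>` should report `bfcadmin:bfc 755`.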

Q: What is the max disk size that an AppLogic physical node (dom0) supports?

A: Currently, the total disk size supported by a single AppLogic physical node (dom0) is 18TB, and each hard disk must be no bigger than 2TB.

From 2.9 to the latest version, AppLogic only supports hard disks with the MBR partitioning scheme, which can only address data within a 2TB space. As a result, every hard disk attached to an AppLogic physical node (dom0) must be 2TB or smaller.

To break the 2TB limit, the hard disk would have to be formatted with the GPT scheme, but AppLogic does not support that at this moment.

At present, the AppLogic physical node (dom0) runs a 32-bit kernel and creates a single logical volume (LV) in the lvm2 scheme to accommodate all user data (mounted as /var/applogic). The max single volume size is 18TB in such a scenario; therefore, a single AppLogic physical node (dom0) supports up to 18TB of disk.

Reference links:

http://en.wikipedia.org/wiki/Master_boot_record#cite_note-1

http://tldp.org/HOWTO/LVM-HOWTO/lvm2faq.html

Q: How to migrate controller volumes between node hard disk and SAN/NFS

A: Log in to one of the physical nodes and execute "3tsrv migrate" (syntax: vol migrate <ctl vol> | --all store=<store>). If the "store" parameter is "local", the volume is migrated to the node's local hard disk; if it is "san", the volume is migrated to the SAN. During the migration, the controller is stopped automatically.

Q: How to migrate a regular app volume between node hard disk and SAN/NFS

A: In the 3t shell, execute "vol migrate" with the parameter "store=<store>". If the "store" parameter is "local", the volume is migrated to the node's local hard disk; if it is "san", the volume is migrated to the SAN. During the migration, the app is stopped automatically.

Q: How to specify the volume location when an app is created or provisioned

A: In 3.5, this is supported by the 3t shell CLI only. Execute whichever of the following 3t shell commands suits your situation with the parameter "store=<store>". If the "store" parameter is "local", the volume is created on the node's local hard disk; if it is "san", the volume is created on the SAN.

      app start
      app build
      app provision

Q: I have enabled AppLogic version auto download in the BFC GUI. How do I know the download status?

A: In all current BFC versions, there is no progress bar to display the auto download status.

The AppLogic version auto download feature uses rsync to download all available versions from download.3tera.net to the download directory specified in the BFC GUI.

Here is a sample of downloading AppLogic 3.1.14 to /opt/bfc/downloads on the BFC:

/usr/bin/rsync -rptgoL -e "ssh -q -o StrictHostKeyChecking=no -i /opt/bfc/fcdownloadkeyfile.pri" applogic@download.3tera.net:3.1.14 /opt/bfc/downloads

In addition, the download progress is written to /opt/bfc/logs/syncApplogicVersions.log as follows. If rsync fails, AppLogic prints the rsync error code.

2012-12-18 02:13:02,649|root|INFO|rsyncing versions [ '3.1.14' ]
2012-12-18 02:13:02,649|root|INFO|/usr/bin/rsync -rptgoL -e "ssh -q -o StrictHostKeyChecking=no -i /opt/bfc/fcdownloadkeyfile.pri" applogic@download.3tera.net:3.1.14 /opt/bfc/downloads
2012-12-18 02:13:03,897|root|INFO|

Therefore, you may either execute "ps -ef | grep rsync" on the BFC to check for the rsync process, or check /opt/bfc/logs/syncApplogicVersions.log to see whether the AppLogic version auto download process is running.
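A sketch of the process check, using `pgrep` against the pattern of the sample rsync command (the message strings are ours, not BFC output):

```shell
# Is an auto-download rsync still running?
if pgrep -f 'rsync.*download\.3tera\.net' >/dev/null 2>&1; then
  msg="auto download in progress"
else
  msg="no auto-download rsync running"
fi
echo "$msg"
```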

Q: How do I know the AppLogic version import progress?

A: In all current BFC versions, there is no progress bar to display the AppLogic version import progress.

The AppLogic version import feature uses rsync to copy and extract the downloaded AppLogic version from the download directory to the import repository directory /opt/bfc/applogic_versions/<applogic version number>.

Here is a sample of importing AppLogic 3.5.19 from the download location /opt/bfc/downloads to the import repository directory /opt/bfc/applogic_versions/:

rsync -av --copy-links --delete --bwlimit=21000 /opt/bfc/downloads/3.5.19 /opt/bfc/applogic_versions

In addition, the import progress is written to /opt/bfc/logs/ContainerX_python.log (X can be either 0 or 1) as follows.

2012-10-08 18:44:10,527|grid_service_driver|DEBUG|:1257 - importGridVersion(), version: 3391 (3.5.19)
2012-10-08 18:44:10,531|grid_service_driver|DEBUG|:1257 - importGridVersion(), version: 3391, connection: 3414
2012-10-08 18:44:10,536|grid_version_resource_driver|DEBUG|:3391 - ready()
2012-10-08 18:44:10,540|fs_storage_resource_driver|DEBUG|:3384 - ready()
2012-10-08 18:44:10,588|fs_storage_resource_driver|DEBUG|:3384 - start()
2012-10-08 18:44:10,590|fs_storage_resource_driver|DEBUG|:3384 - getStorageBaseDir(), rootdir: /opt/bfc/applogic_versions
2012-10-08 18:44:10,590|fs_storage_resource_driver|DEBUG|:3384 - copyBits(), root: /opt/bfc/downloads, sources: ['3.5.19'], dest:/opt/bfc/applogic_versions
2012-10-08 18:44:10,613|fs_storage_resource_driver|DEBUG|:3384 - dir_size(), dir: /opt/bfc/downloads/3.5.19, size(kb): 18240704
2012-10-08 18:44:10,614|fs_storage_resource_driver|DEBUG|:3384 - check_fs_space(), sourceSpaceKb: 18240704, targetSpaceKb: 46498124
2012-10-08 18:44:10,615|fs_storage_resource_driver|DEBUG|:3384 - pathtosrc: /opt/bfc/downloads/3.5.19
pathtodest: /opt/bfc/applogic_versions/3.5.19
destdir: /opt/bfc/applogic_versions
2012-10-08 18:44:10,629|fs_storage_resource_driver|DEBUG|:3384 - copyBits(), rsync command: ['rsync', '-av', '--copy-links', '--delete', '--bwlimit=21000', '/opt/bfc/downloads/3.5.19', '/opt/bfc/applogic_versions']
2012-10-08 18:44:52,691|download_resource_driver|DEBUG|:1267 - verifying download directory contents

Therefore, you may use the approaches below to see the import progress:

  1. Execute "ps -ef | grep rsync" on the BFC to check for the rsync process.
  2. Monitor rsync traces in /opt/bfc/logs/ContainerX_python.log (X is either 0 or 1).
  3. Enter the import repository directory, for instance /opt/bfc/applogic_versions/3.5.19, and execute "du ./" to see whether the directory size is increasing.
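The `du` check in step 3 boils down to two samples and a comparison. A sketch (demonstrated on a temp directory with a synthetic write standing in for rsync; on the BFC, point dir at /opt/bfc/applogic_versions/<version> and sample a few seconds apart):

```shell
# Stand-in for the real import directory
dir=$(mktemp -d)
before=$(du -sk "$dir" | awk '{print $1}')
dd if=/dev/zero of="$dir/chunk" bs=1024 count=64 2>/dev/null  # simulate rsync writing data
after=$(du -sk "$dir" | awk '{print $1}')

[ "$after" -gt "$before" ] && echo "import still writing data" || echo "size unchanged"
rm -rf "$dir"
```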