This article introduces the grid node boot-up routine in AppLogic 3.x.
The entry point is /etc/rc.d/init.d/applogic. It is launched when the node comes up and performs the following functions.
Major functionalities and details:
/usr/local/apl-srv/bin/net_discover.sh -> /usr/local/apl-srv/bin/nwd.pl - performs network layout discovery
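Network discovery has to start from the set of physical NICs on the node. As a purely illustrative sketch (not nwd.pl's actual code), enumerating candidate interfaces from sysfs in shell could look like this; the sysfs base is a parameter so the function is testable:

```shell
#!/bin/sh
# Hypothetical sketch: list candidate NICs by scanning /sys/class/net,
# skipping loopback. Illustration only, not the real discovery logic.

list_nics() {
    base=${1:-/sys/class/net}
    for dev in "$base"/*; do
        [ -e "$dev" ] || continue          # guard against an empty glob
        name=$(basename "$dev")
        [ "$name" = "lo" ] && continue     # skip loopback
        echo "$name"
    done
}
```

On a real node this would print eth0, eth1, ... plus any bondX/xenbrX devices already present.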
/usr/local/bin/3tbond info bondX - get bondX information
/usr/local/bin/3tbond destroy bondX - destroy bondX
/usr/local/bin/3tbond create bond0 nics=eth0,eth1 - create bond0 on eth0 and eth1
/usr/local/bin/3tbond create bond1 nics=eth2,eth3 - create bond1 on eth2 and eth3
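3tbond is AppLogic-specific, but its create operation can be sketched in terms of the stock Linux bonding driver's sysfs interface. Everything below (the run helper, the flow, the enslaving order) is an assumption for illustration, not 3tbond's actual implementation; DRY_RUN=1 prints the commands instead of executing them:

```shell
#!/bin/sh
# Sketch of "3tbond create bond0 nics=eth0,eth1" using the Linux
# bonding driver's sysfs interface. Illustration only.

DRY_RUN=${DRY_RUN:-1}

run() {
    # In dry-run mode, print the command instead of executing it.
    if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

create_bond() {
    bond=$1; shift
    # Register a new bond master with the bonding driver.
    run sh -c "echo +$bond > /sys/class/net/bonding_masters"
    for nic in "$@"; do
        # A NIC must be down before it can be enslaved.
        run ip link set "$nic" down
        run sh -c "echo +$nic > /sys/class/net/$bond/bonding/slaves"
    done
    run ip link set "$bond" up
}

# Equivalent in spirit to "3tbond create bond0 nics=eth0,eth1":
create_bond bond0 eth0 eth1
```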
/var/local/applogic/ha_backbone.*
/var/local/applogic/ha_external.*
/etc/rc.d/init.d/xencommons start
/etc/rc.d/init.d/xend start
/usr/local/apl-srv/bin/xenbr.sh
xenbr.sh invokes the following commands to attach the bonding devices to the Xen bridge interfaces:
brctl show - show all Xen bridge interfaces
brctl stp xenbrX off - turn off STP on Xen bridge interface xenbrX
brctl addif xenbrX bondX - attach bondX to Xen bridge interface xenbrX
brctl delif xenbrX bondX - remove bondX from Xen bridge interface xenbrX
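Putting those brctl calls together, xenbr.sh plausibly does something like the following per bridge/bond pair (a sketch under assumptions, not the actual script; DRY_RUN=1 echoes the commands instead of running them):

```shell
#!/bin/sh
# Sketch: disable STP on a Xen bridge and attach the bond to it.

DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

attach_bond() {
    bridge=$1 bond=$2
    # STP is not needed on these bridges and delays port activation.
    run brctl stp "$bridge" off
    run brctl addif "$bridge" "$bond"
}

attach_bond xenbr0 bond0
attach_bond xenbr1 bond1
```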
The following scripts are invoked:
/usr/local/apl-srv/bin/fill_template.pl
/usr/local/vrm/scripts/vrmcfggen.sh
/usr/local/vrm/scripts/vrm_ctl.sh
  -> /usr/local/vrm/vrmd - launches vrmd
  -> /usr/local/vrm/smnd.sh - monitors VRM and reboots the server if VRM crashes or unloads
     -> /usr/local/vrm/scripts/srv_reboot_main.sh - reboots the server
  -> /usr/local/vrm/scripts/srv_get_infod.sh - gets server hardware and VM info if the hypervisor is VMware
  -> /usr/local/vnp/vnp_load - loads the vnp driver (vnp.ko) and passes a few ethX/bondX/xenbrX interfaces to it
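The smnd.sh watchdog behavior described above (reboot the server if VRM disappears) can be sketched as a poll loop. The pid-based liveness check, interval, and hand-off are assumptions, not the real script:

```shell
#!/bin/sh
# Watchdog sketch: wait until the watched process exits, then hand off
# to the reboot helper. Not the real smnd.sh.

INTERVAL=${INTERVAL:-5}
REBOOT_CMD=${REBOOT_CMD:-/usr/local/vrm/scripts/srv_reboot_main.sh}

vrm_alive() {
    # kill -0 sends no signal; it only tests that the pid exists.
    kill -0 "$1" 2>/dev/null
}

watchdog() {
    pid=$1
    while vrm_alive "$pid"; do
        sleep "$INTERVAL"
    done
    echo "vrmd (pid $pid) is gone; would run $REBOOT_CMD"
    # The real script would now actually reboot the server:
    # exec "$REBOOT_CMD"
}
```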
/etc/rc.d/init.d/recovery_master
This script performs the following operations:
/usr/local/apl-srv/bin/ctlb_ctl2.sh
  -> /usr/local/apl-srv/bin/fill_template.pl - creates the ctlb configuration file (/var/applogic/boot/ctlb.conf)
  -> /usr/local/apl-srv/bin/ctlb_prep.sh - preparation before controller startup: creates metadata of the controller volumes; creates the cluster descriptor; sets the recovery GUI password
  -> /usr/local/apl-srv/bin/ctlb_conn_mon.sh - connects to every physical node that has a controller volume stream
  -> /usr/local/ctlb/ctlb ??????
  -> /usr/local/apl-srv/bin/ctl_vm_ctl.sh - starts the controller VM and verifies it by fetching the controller httpd URL with wget
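The final verification step (fetching the controller httpd URL with wget) can be sketched as a retry loop. The function name, retry budget, and URL handling are assumptions about how ctl_vm_ctl.sh behaves:

```shell
#!/bin/sh
# Sketch: poll the controller's httpd until it responds or retries run
# out. Illustration of the ctl_vm_ctl.sh verification step only.

wait_for_controller() {
    url=$1
    retries=${2:-30}
    i=0
    while [ "$i" -lt "$retries" ]; do
        # -q: quiet; -O /dev/null: discard the page, only the exit
        # status matters here.
        if wget -q -O /dev/null "$url"; then
            echo "controller httpd is up: $url"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "controller httpd did not answer: $url"
    return 1
}
```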
/usr/local/apl-srv/bin/ntp_ctl.sh start
/usr/bin/sdparm -s WCE=1 /dev/sdaX - enables the write cache (WCE) on /dev/sdaX
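The sdparm invocation above is per disk. Applying it across disks could look like the following sketch (the loop and dry-run wrapper are assumptions; only the sdparm command itself comes from the table):

```shell
#!/bin/sh
# Sketch: enable the write-cache (WCE) mode-page bit on each disk.

DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

enable_wce() {
    for disk in "$@"; do
        run /usr/bin/sdparm -s WCE=1 "/dev/$disk"
    done
}

enable_wce sda sdb
```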
/var/local/applogic/ncq.op - turns NCQ on or off at node startup
/var/local/applogic/ncq.sda - NCQ queue depth
/usr/local/apl-srv/bin/ncq - based on the above two files, configures NCQ by echoing the queue depth to /sys/block/sdX/device/queue_depth
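A sketch of how the ncq helper could combine those two files. The file formats are assumptions (ncq.op holding on/off, ncq.sdX holding a depth); the sysfs path is the one named above. The config and sysfs roots are parameters so the logic is testable:

```shell
#!/bin/sh
# Sketch: apply NCQ settings for one disk from the ncq.* config files.
# Not the real /usr/local/apl-srv/bin/ncq.

CFG_DIR=${CFG_DIR:-/var/local/applogic}
SYS_BLOCK=${SYS_BLOCK:-/sys/block}

apply_ncq() {
    dev=$1
    op=$(cat "$CFG_DIR/ncq.op" 2>/dev/null)
    depth=$(cat "$CFG_DIR/ncq.$dev" 2>/dev/null)
    if [ "$op" = "on" ] && [ -n "$depth" ]; then
        echo "$depth" > "$SYS_BLOCK/$dev/device/queue_depth"
    else
        # A queue depth of 1 effectively disables NCQ.
        echo 1 > "$SYS_BLOCK/$dev/device/queue_depth"
    fi
}
```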
/usr/local/applogic/bin/3tsmartcfg - creates the SMART configuration file (/etc/smartd.conf)
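The generated /etc/smartd.conf presumably lists each disk with monitoring directives. A hypothetical generator is sketched below; the directive choices and output handling are assumptions (the directives themselves are standard smartd.conf syntax), and this is not the real 3tsmartcfg:

```shell
#!/bin/sh
# Sketch: emit one smartd.conf entry per disk.

gen_smartd_conf() {
    out=$1; shift
    {
        echo "# generated at node startup"
        for dev in "$@"; do
            # -a: monitor all SMART attributes; -o on: automatic
            # offline tests; -S on: attribute autosave.
            echo "/dev/$dev -a -o on -S on"
        done
    } > "$out"
}
```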
ipmi_msghandler.ko
ipmi_devintf.ko
ipmi_si.ko
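These modules form the Linux IPMI stack: ipmi_msghandler is the core module that ipmi_devintf (the /dev/ipmi* interface) and ipmi_si (the system interface driver) depend on. A dry-run sketch of loading them in order (the loop is an assumption; modprobe would also resolve the dependencies on its own):

```shell
#!/bin/sh
# Sketch: load the IPMI stack, core module first.

DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

load_ipmi() {
    for mod in ipmi_msghandler ipmi_devintf ipmi_si; do
        run modprobe "$mod"
    done
}

load_ipmi
```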
/usr/local/apl-srv/bin/nfsctl.sh
Environment files used:
/etc/applogic.env
/usr/local/vrm/scripts/vrm_sc_defs.sh
Copyright © 2013 CA Technologies.
All rights reserved.