

Filer_Solaris - Solaris Filer Appliance

Latest Version: 4.0.1-1

At a Glance

Catalog:       Filer
Category:      Filers
User volumes:  yes
Min. memory:   256 MB
OS:            Solaris
Constraints:   no

Functional overview

Filer_Solaris is a filer appliance that provides filesystem-level access to a volume with a Solaris file system. Filer_Solaris supports two file systems, ufssol and zfs, and the following modes of operation:

format: format the volume to the specified filesystem (for example, execute mkfs)

fscopy: perform a filesystem-level copy from one volume to another; the destination volume is formatted prior to the copy

fsck: check the file system on the volume

fsrepair: check and repair the file system on the volume

manual: provide user-level access to the volume through both a Web GUI and root shell (through SSH)

In manual mode, Filer_Solaris provides GUI access and a root shell to the volume through its default interface. In this mode, Filer_Solaris also optionally provides external network access for the user to copy files to and from the volume. Manual mode operation can be performed over one or two volumes.

The Filer appliances are used internally by AppLogic and should not be used in regular AppLogic applications.

Boundary

Resources

Resource    Minimum   Maximum   Default
CPU         0.05      0.05      0.05
Memory      256 MB    1 GB      512 MB
Bandwidth   1 Mbps    1 Mbps    1 Mbps

Terminals

None.

The external interface is enabled. It is used for incoming and outgoing traffic, and its network settings are configured through properties. It is used only in manual mode and is not configured in any other mode.

The default interface is enabled. It is used for maintenance. In manual mode, it is also used to access the Web GUI.

User Volumes

src: Source volume for filesystem-level volume copy or for management of two volumes. Always mounted read-only except by the Windows03 filer.

dst: Volume that Filer_Solaris provides access to. All operations are executed on this volume. Mounted read-only in fsck mode, and in manual mode if the mount_mode property is ro; otherwise mounted read/write. Mandatory in all modes.

Properties

mode (enum): Mode of operation for the filer. Valid values are: manual, format, fscopy, fsck, fsrepair. This property is mandatory.

fs_type_src (enum): File system on the src volume when two volumes are being managed. See fs_type_dst for valid values. This property is mandatory when two volumes are being managed; otherwise, it is ignored.

fs_type_dst (enum): File system on the dst volume. Depending on mode, it is either the file system currently on the dst volume or the file system to format on the dst volume. Valid values are: ufssol and zfs. This property is mandatory.

fs_options (string): Additional file system options used to format the dst volume, as option=value pairs. This property is file system specific and is valid only in the format and fscopy modes. See below for the options that are valid for each file system. Default: (empty)

mount_mode (enum): Mount mode of the dst volume in manual operations. Valid values are: rw, ro, and none. A value of none causes the dst volume not to be mounted. Default: ro

ip_addr (ip_owned): Defines the IP address of the external interface in manual mode. If set to 0.0.0.0, the external interface is not used. Default: 0.0.0.0 (not used)

netmask (IP address): Defines the network mask of the external interface in manual mode. This property must be specified if ip_addr is specified. Default: 0.0.0.0

gateway (IP address): Defines the default network gateway for the external interface in manual mode. It can be left blank only if the remote host is on the same subnet; otherwise it must be specified. Default: (empty)

dns1 (IP address): Defines the primary DNS server used in manual mode to resolve domain names. This allows the user to specify hostnames when uploading/downloading files to/from a volume. Default: 208.67.222.222 (OpenDNS.org address)

dns2 (IP address): Defines the secondary DNS server, used if the primary DNS server does not respond. Default: 208.67.220.220 (OpenDNS.org address)

vol_name_src (string): Name of the src volume being accessed by the filer when two volumes are being managed. Default: (empty)

vol_name_dst (string): Name of the dst volume being accessed by the filer. Default: (empty)
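The same-subnet rule for the gateway property can be checked with a simple bitwise comparison: two hosts share a subnet when (address AND netmask) is equal for both. A minimal sketch (the addresses below are examples, not defaults):

```shell
# Hypothetical addresses; substitute your own.
ip=192.168.123.100
remote=192.168.123.7
mask=255.255.255.0

# AND each octet of a dotted-quad address ($1) with the netmask ($2)
net() {
  echo "$1" | { IFS=. read -r a b c d
    echo "$2" | { IFS=. read -r m n o p
      echo "$((a & m)).$((b & n)).$((c & o)).$((d & p))"
    }
  }
}

if [ "$(net "$ip" "$mask")" = "$(net "$remote" "$mask")" ]; then
  echo "same subnet: gateway may be left blank"
else
  echo "different subnet: gateway must be specified"
fi
```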

Operation Modes

The following table lists the supported modes for each of the supported file systems:

          format   fscopy   fsck   fsrepair   manual
ufssol    yes      yes      yes    yes        yes
zfs       yes      yes      no     yes        yes

In manual mode:

For all file systems except swap, the volume is mounted on /mnt/vol.

For a swap volume, the block device is accessible on /dev/hda4.

Filesystem Options

This section lists the file system options (as specified on fs_options) for each file system supported by Filer_Solaris.

ufssol

None

zfs

pool_name: name of the zpool to create on the dst volume. If omitted, the value of the vol_name_dst property is used instead.

mountpoint: mountpoint of the root dataset of the created zpool. Valid values are: an absolute path (e.g. /mnt/mypool), legacy, and none. Datasets with legacy mounts are not automatically managed by zfs and require entries in /etc/vfstab or manual mounting. Datasets with a mountpoint of none are not mountable. Default: /pool_name.

autoreplace: controls automatic device replacement. If set to off, device replacement must be manually initiated using zpool replace; if set to on, any new device found in the same physical location is automatically formatted and replaced. Default: off.

delegation: controls whether a non-privileged user is granted access based on permissions defined on datasets. Valid values are off and on. Default: on.

failmode: controls behavior in the event of failure. Valid values are wait, continue and panic. Default: wait.

version: zpool version. Valid values are 1-10. Default: 10 (current).
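For example, formatting a volume to zfs with explicit pool options might use property settings like the following. This is an illustrative sketch: the document does not specify the separator between option=value pairs, so a comma-separated list is assumed here, and the pool name mypool is a made-up example.

```
mode         = format
fs_type_dst  = zfs
vol_name_dst = mypool
fs_options   = pool_name=mypool, mountpoint=/mnt/mypool, failmode=continue
```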

Interface

The Filer appliances provide an HTTP interface on their default interface to collect status on non-manual volume operations and to access the Web GUI when in manual mode. The following functions are available by URL:

/: interactive access to the dst volume through the Web GUI, only available in manual mode

/api/status: returns the status for the current volume operation, only available in non-manual mode

The format of the output is the following: [progress=W, ]poll=X, status=Y, errortxt=Z

progress: integer, 0..100, progress of the current operation. If progress cannot be reported, then the progress field is not returned. Progress is not reported for the following modes:

format for all file systems

fsck and fsrepair for all file systems except ext2, ext3, ext3-snapshot, and ufssol

poll: integer, recommended status poll interval, in seconds.

status: integer, status of the volume operation. See below for the list of statuses that can be returned by Filer_Solaris.

errortxt: string, error message, if an error occurred (e.g., non-zero status)

The following is the list of statuses that Filer_Solaris can return in the status field for a specific volume operation:

0 - success

100 - operation failed

101 - operation not supported

102 - operation not implemented

103 - operation canceled

104 - I/O error

200 - no space left on volume

201 - file system errors detected

300 - out of memory

400 - pending
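A client on the default interface can fetch /api/status and pull the fields out of the reply with standard shell tools. A minimal sketch; the reply string below is a hard-coded example in the documented format (in practice it would be fetched from the filer, e.g. with curl against the default interface address):

```shell
# Example reply in the documented format:
#   [progress=W, ]poll=X, status=Y, errortxt=Z
reply='progress=42, poll=5, status=0, errortxt='

# Extract the documented fields
status=$(printf '%s' "$reply" | sed -n 's/.*status=\([0-9]*\).*/\1/p')
poll=$(printf '%s' "$reply" | sed -n 's/.*poll=\([0-9]*\).*/\1/p')
progress=$(printf '%s' "$reply" | sed -n 's/.*progress=\([0-9]*\).*/\1/p')

# Status 0 means success; a missing progress field means progress
# is not reported for this mode/file system combination.
echo "progress=${progress:-n/a} status=$status; poll again in ${poll}s"
```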

Web GUI

The Filer appliances use a web-based file manager named eXtplorer to provide Web GUI access to a volume (accessible only in manual mode). eXtplorer is released under the GNU General Public License, Version 2. The version of eXtplorer used in the filers has been modified. The following are the changes to eXtplorer:

  1. Removed the login.
  2. Updated eXtplorer not to display its own files.
  3. Changed the file list to show the target for all links under the "Type" column.
  4. Changed the tooltip generated when the mouse is over a directory in the directory list to show the symlink target if the directory is a symlink.
  5. Changed symlink creation through the GUI to support orphaned links.
  6. Changed delete file through the GUI to support deletion of symlinks.
  7. Added an interface for editing the volume base path for any available volume.
  8. Changed the generation of file & directory lists to support links.
  9. Resolved relative & absolute links which include '..'.
  10. Added a UI for chgrp/chown, allowing numeric entries only.
  11. Added owner/group to the file display.

The eXtplorer licenses and the source to the original, unmodified eXtplorer can be found on the Filer appliances in /mnt/monitor/.volume_browser/LICENSES/.

ZFS Implementation Specifics

Filer_Solaris supports zfs pools containing a single virtual device to allow users access to zfs volumes in the same manner as volumes using other file systems such as ufssol. More complex pools using multiple devices can be created manually using raw volumes within an AppLogic appliance, but such volumes cannot be used with Filer_Solaris. ZFS filer operations are constrained to the following behaviors.

Pools are created using the altroot property. As a result, the mountpoint of the root dataset must be set explicitly rather than defaulting to the pool name. This works around a bug in the current zpool command, which sets the default mountpoint to /altroot rather than /altroot/pool_name.
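Inside the appliance this corresponds to a zpool create with an alternate root and an explicit mountpoint. An illustrative command fragment, shown for context rather than as a runnable recipe (the pool name mypool and device name c0d1 are assumptions):

```shell
# Illustrative only: create the pool under an alternate root (-R) with an
# explicit mountpoint (-m), so the root dataset is not mounted at /altroot.
zpool create -R /altroot -m /mnt/mypool mypool c0d1
```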

fsrepair executes zpool scrub and returns a single line of status on completion: either success or failure. However, zpool scrub can also be executed live on any pool within a running appliance, where it displays much more information in the event of a problem.

fscopy supports only file system datasets (volume, snapshot and clone datasets are not copied). Administrative permissions are not preserved by fscopy.

While the zpool version can be set with fs_options at creation time, the zfs version of the root dataset is 2, which is not backwards compatible with version 1. Solaris 10 appliances use zfs version 1. To use zfs pools with Solaris 10 appliances, create the pools manually from raw volumes rather than with Filer_Solaris.

The Solaris filer does not support root zpools (zfs boot volumes). There is a bug in OpenSolaris 2008.05 that renders a root zpool unbootable once it has been imported into another Solaris OS. OpenSolaris 2008.11 does not allow import of a bootable zpool at all.

Typical Usage

The following sections describe the configuration of Filer_Solaris in several typical use cases:

formatting a volume

Example:

Property Name   Value    Description
mode            format   format volume
fs_type_dst     ufssol   format volume with Solaris UFS

Filer_Solaris executes mkfs over the dst volume, specifying a filesystem type of ufs.

filesystem-level volume copy

Example:

Property Name   Value    Description
mode            fscopy   filesystem-level copy
fs_type_dst     ufssol   format destination volume with Solaris UFS

Filer_Solaris formats the dst volume to ufs with mkfs. It then mounts the src volume read-only and mounts the dst volume read/write. Finally, Filer_Solaris copies the contents of the src volume to the dst volume using cp and unmounts both volumes.
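The copy step of this sequence can be sketched with plain directories standing in for the two mounted volumes (mkfs and the actual mounts require the appliance itself; cp -rp here approximates the recursive, attribute-preserving copy):

```shell
# Stand-ins for the mounted src (read-only) and dst (freshly formatted,
# read/write) volumes
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/file.txt"

# Copy the full contents of src into dst, preserving attributes
cp -rp "$src/." "$dst/"

content=$(cat "$dst/file.txt")
echo "$content"          # → hello

rm -rf "$src" "$dst"
```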

file system check

Example:

Property Name   Value    Description
mode            fsck     file system check
fs_type_dst     ufssol   volume to be checked has Solaris UFS

Filer_Solaris executes fsck on the dst volume.

file system check with repair

Example:

Property Name   Value      Description
mode            fsrepair   file system check with repair
fs_type_dst     ufssol     volume to be checked and repaired has Solaris UFS

Filer_Solaris executes fsck with the repair option on the dst volume.

user-level access to volume

Example:

Property Name   Value             Description
mode            manual            provide user-level access to volume
fs_type_dst     ufssol            volume has Solaris UFS
mount_mode      rw                read/write access to the volume
ip_addr         192.168.123.100   IP address for external interface
netmask         255.255.255.0     netmask for external interface
gateway         192.168.123.1     gateway for external interface
dns1            192.168.123.254   DNS server

Filer_Solaris mounts the dst volume read/write at /mnt/vol. It then starts the eXtplorer GUI and starts sshd, which gives the user root access to the volume. The GUI is accessible through the default interface, and any file transfers to/from the volume are performed through the external interface.
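With this configuration, client access might look like the following illustrative session. The default-interface address 10.0.1.5 is an assumption (it is assigned by AppLogic, not by these properties); file transfers go to the external address configured above:

```shell
# Web GUI (eXtplorer) via the default interface:
#   http://10.0.1.5/
# Root shell via SSH on the default interface:
ssh root@10.0.1.5
# Copy a file onto the mounted volume over the external interface:
scp backup.tar root@192.168.123.100:/mnt/vol/
```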

Notes

The Solaris Filer is based on OpenSolaris build 2008.05.

In non-manual modes, there is no SSH or GUI access.

3rd party open source software used inside the appliance

Filer_Solaris is based on OSOL. A number of packages have been removed from the base class to build Filer_Solaris. Filer_Solaris uses the following 3rd party/open source packages in addition to those used by its base class, OSOL.

Software    Version        Modified   License
apache      2.2.8-1        Yes        Apache 2.0
php         5.2.6-1        Yes        PHP v3.01
eXtplorer   2.0.0_RC1-15   Yes        GPLv2
cpio        2.10-1         Yes        GPLv3