Saturday, January 24, 2015

Install & Configure a Kernel Zone on Solaris 11.2


Oracle Solaris 11.2 introduces a new type of zone called the kernel zone. A kernel zone is similar to an Oracle VM Server for SPARC (LDom) guest: it runs its own kernel and patch level and is completely isolated from the global zone. These solaris-kz branded zones are supported on both SPARC and x86 hardware, but the processor must support virtualization technology (VT). On x86 hardware, you may also have to enable this option in the system BIOS if the CPU supports VT. Let's see how to configure and install a kernel zone on Solaris 11.2.
1. Log in to the Solaris 11.2 global zone and check whether the system supports kernel zones.


UA_GLOBAL# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared

UA_GLOBAL#uname -a
SunOS SAN 5.11 11.2 i86pc i386 i86pc

UA_GLOBAL#virtinfo
NAME CLASS
vmware current
non-global-zone supported
kernel-zone supported

As per the above output, this hardware supports kernel zones.

2. The system should have at least 8 GB of physical memory, 2 virtual processors (2 cores), and 16 GB of free space for the virtual disk.


UA_GLOBAL#prtconf -v |head -4
System Configuration: Oracle Corporation i86pc
Memory size: 8780 Megabytes
System Peripherals (Software Nodes):

 
UA_GLOBAL#psrinfo |wc -l
2
UA_GLOBAL#

 
3. Create a new kernel zone and check the zone's configuration.


UA_GLOBAL#zonecfg -z UAKLZ1 create -t SYSsolaris-kz

UA_GLOBAL#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- UAKLZ1 configured - solaris-kz excl

UA_GLOBAL#zonecfg -z UAKLZ1 info
zonename: UAKLZ1
brand: solaris-kz
autoboot: false
autoshutdown: shutdown
bootargs:
pool:
scheduling-class:
hostid: 0x28c3c78d
tenant:
anet:
lower-link: auto
allowed-address not specified
configure-allowed-address: true
defrouter not specified
allowed-dhcp-cids not specified
link-protection: mac-nospoof
mac-address: auto
mac-prefix not specified
mac-slot not specified
vlan-id not specified
priority not specified
rxrings not specified
txrings not specified
mtu not specified
maxbw not specified
rxfanout not specified
vsi-typeid not specified
vsi-vers not specified
vsi-mgrid not specified
etsbw-lcl not specified
cos not specified
evs not specified
vport not specified
id: 0
device:
match not specified
storage: dev:/dev/zvol/dsk/rpool/VARSHARE/zones/UAKLZ1/disk0
id: 0
bootpri: 0
capped-memory:
physical: 2G
UA_GLOBAL#

 
4. Here are the available zpools on my system. As per the previous command output, the kernel zone is going to create its virtual disk under rpool.


UA_GLOBAL#zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
cloudS 23.8G 0G 23.8G 0% 1.00x ONLINE -
rpool 15.6G 11.6G 4.06G 74% 1.00x ONLINE -
UA_GLOBAL#

rpool does not have 16 GB of free space, so let me modify the zone's configuration to point to the cloudS zpool (see the note after step 5).


5. Invoke the zonecfg command to modify the virtual disk's storage path.
UA_GLOBAL#zonecfg -z UAKLZ1
zonecfg:UAKLZ1> select device id=0
zonecfg:UAKLZ1:device> info
device:
match not specified
storage.template: dev:/dev/zvol/dsk/%{global-rootzpool}/VARSHARE/zones/%{zonename}/disk%{id}
storage: dev:/dev/zvol/dsk/rpool/VARSHARE/zones/UAKLZ1/disk0
id: 0
bootpri: 0
zonecfg:UAKLZ1:device> set storage=dev:/dev/zvol/dsk/cloudS/zones/
zonecfg:UAKLZ1:device> info
device:
match not specified
storage: dev:/dev/zvol/dsk/cloudS/zones/
id: 0
bootpri: 0
zonecfg:UAKLZ1:device> end
zonecfg:UAKLZ1> commit
zonecfg:UAKLZ1> exit
UA_GLOBAL#
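Note that whatever path you set here must ultimately resolve to a usable ZFS volume of at least 16 GB. One approach (a sketch only; the volume name UAKLZ1-disk0 is hypothetical) is to pre-create a zvol in the cloudS pool and point the device's storage property at it:

UA_GLOBAL# zfs create -p -V 16g cloudS/zones/UAKLZ1-disk0
UA_GLOBAL# zonecfg -z UAKLZ1 "select device id=0; set storage=dev:/dev/zvol/dsk/cloudS/zones/UAKLZ1-disk0; end; commit"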

6. You need an IPS repository to install the kernel zone. If you do not have a local repository, just point to the Oracle IPS repository.


UA_GLOBAL# pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F http://pkg.oracle.com/solaris/release/
UA_GLOBAL#

You can set the above repository as follows:


UA_GLOBAL#pkg set-publisher -O http://pkg.oracle.com/solaris/release solaris

7. Install the kernel zone using the command below.


UA_GLOBAL#zoneadm -z UAKLZ1 install
Progress being logged to /var/log/zones/zoneadm.20140806T194800Z.UAKLZ1.install
pkg cache: Using /var/pkg/publisher.
Install Log: /system/volatile/install.8393/install_log
AI Manifest: /tmp/zoneadm7814.pza40p/devel-ai-manifest.xml
SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
Installation: Starting ...

Creating IPS image
Installing packages from:
solaris
origin: http://pkg.oracle.com/solaris/release/
The following licenses have been accepted and not displayed.
Please review the licenses for the following packages post-install:
consolidation/osnet/osnet-incorporation
Package licenses may be viewed using the command:
pkg info --license <pkg_fmri>

DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 483/483 64276/64276 543.7/543.7 126k/s

PHASE ITEMS
Installing new actions 87530/87530
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Installation: Succeeded
Done: Installation completed in 1355.389 seconds.

UA_GLOBAL#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- UAKLZ1 installed - solaris-kz excl
UA_GLOBAL#

8. The zone may fail to boot due to insufficient resources, as shown below.


UA_GLOBAL#zoneadm -z UAKLZ1 boot
zone 'UAKLZ1': error: boot failed
zone 'UAKLZ1': error: Failed to create VM: Not enough space
zone 'UAKLZ1': error: allocation of guest RAM failed
zoneadm: zone UAKLZ1: call to zoneadmd(1M) failed: zoneadmd(1M) returned an error 1 (unspecified error)
UA_GLOBAL#

In this case, I just added one more CPU core and booted the zone (the zone's own CPU and memory allocation can also be tuned with zonecfg, as sketched below).
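For reference, the kernel zone's capped-memory and virtual-cpu resources can be adjusted before booting. A minimal sketch, assuming you want two virtual CPUs and a 2 GB memory cap (the values are examples only):

UA_GLOBAL# zonecfg -z UAKLZ1 "add virtual-cpu; set ncpus=2; end; select capped-memory; set physical=2G; end; commit"
UA_GLOBAL# zoneadm -z UAKLZ1 boot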


9. Boot the kernel zone and log in to the zone's console for the initial setup.
root@UA-GLOBAL:~# zoneadm -z UAKLZ1 boot
root@UA-GLOBAL:~# zlogin -C UAKLZ1
[Connected to zone 'UAKLZ1' console]
SC profile successfully generated as:
/etc/svc/profile/sysconfig/sysconfig-20140806-203628/sc_profile.xml

Exiting System Configuration Tool. Log is available at:
/system/volatile/sysconfig/sysconfig.log.300
Hostname: UAKLZ1
UAKLZ1 console login: root
Password:
Aug 7 02:15:40 UAKLZ1 login: ROOT LOGIN /dev/console
Oracle Corporation SunOS 5.11 11.2 June 2014
root@UAKLZ1:~#

10. Here is some interesting output from within the kernel zone.


root@UAKLZ1:~# zonename
global
root@UAKLZ1:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
root@UAKLZ1:~# virtinfo
NAME CLASS
kernel-zone current
non-global-zone supported
root@UAKLZ1:~#

The kernel zone reports itself as "global" if you type "zonename", and you can install non-global zones under a kernel zone.
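For example, a minimal sketch of configuring and installing a non-global zone from inside the kernel zone (ngz1 is a hypothetical zone name, and a reachable IPS publisher is required; see step 13):

root@UAKLZ1:~# zonecfg -z ngz1 create
root@UAKLZ1:~# zoneadm -z ngz1 install
root@UAKLZ1:~# zoneadm -z ngz1 boot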


11. You can log in to the kernel zone using zlogin from the global zone without providing a username/password, like other non-global zones.
root@SAN:~# zlogin UAKLZ1
[Connected to zone 'UAKLZ1' pts/2]
Oracle Corporation SunOS 5.11 11.2 June 2014
root@UAKLZ1:~# df -h
Filesystem Size Used Available Capacity Mounted on
rpool/ROOT/solaris 15G 2.1G 11G 16% /
/devices 0K 0K 0K 0% /devices
/dev 0K 0K 0K 0% /dev
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 1.7G 1.5M 1.7G 1% /system/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/dev/kz/sdir/shared@0
6.9G 1.7M 6.9G 1% /system/shared
/usr/lib/libc/libc_hwcap1.so.1
13G 2.1G 11G 16% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
rpool/ROOT/solaris/var
15G 122M 11G 2% /var
swap 1.7G 0K 1.7G 0% /tmp
rpool/VARSHARE 15G 2.4M 11G 1% /var/share
rpool/VARSHARE/zones 15G 31K 11G 1% /system/zones
rpool/export 15G 32K 11G 1% /export
rpool/export/home 15G 31K 11G 1% /export/home
rpool 15G 32K 11G 1% /rpool
rpool/VARSHARE/pkg 15G 32K 11G 1% /var/share/pkg
rpool/VARSHARE/pkg/repositories
15G 31K 11G 1% /var/share/pkg/repositories
root@UAKLZ1:~#

12. You manage the network using ipadm within the kernel zone itself (see the sketch after the output below).



root@UAKLZ1:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip ok -- --
net0/v4 static ok -- 192.168.2.59/24
net0/v6 addrconf ok -- fe80::8:20ff:fe24:543/10
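If you ever need to change the address manually, the usual ipadm workflow applies inside the kernel zone. A minimal sketch, assuming a hypothetical new static address of 192.168.2.60/24 on net0:

root@UAKLZ1:~# ipadm delete-addr net0/v4
root@UAKLZ1:~# ipadm create-addr -T static -a 192.168.2.60/24 net0/v4
root@UAKLZ1:~# ipadm show-addr net0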

13. You need to configure a package repository for the kernel zone, just as for the global zone, for any additional package installation and for non-global zone installation (see the example after the output below).



root@UAKLZ1:~# pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F http://pkg.oracle.com/solaris/release/
root@UAKLZ1:~#
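For example, to repoint the kernel zone at a local repository instead (the URL below is only a placeholder for your own repository):

root@UAKLZ1:~# pkg set-publisher -G '*' -g http://ipsrepo.example.com/solaris/ solaris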

14. In Solaris 11.2, you can suspend the zone and resume it when needed. This is similar to VMware's VM suspend and resume functionality. You first need to set the suspend file path.


root@SAN:~# zonecfg -z UAKLZ1
zonecfg:UAKLZ1> select suspend
zonecfg:UAKLZ1:suspend> set path=/cloudS/UAKLZ1_suspend
zonecfg:UAKLZ1:suspend> end
zonecfg:UAKLZ1> commit
zonecfg:UAKLZ1> exit
root@UA-GLOBAL:~# zonecfg -z UAKLZ1 info suspend
suspend:
path: /cloudS/UAKLZ1_suspend
storage not specified

root@UA-GLOBAL:~# zoneadm -z UAKLZ1 suspend

root@UA-GLOBAL:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- UAKLZ1 installed - solaris-kz excl
root@SAN:~#

 
15. You can resume the zone using the boot command. Once the zone is resumed, the suspend file is removed. You can also migrate a suspended zone from one global zone to another.


root@UA-GLOBAL:~# cd /cloudS/

root@SAN:/cloudS# ls -lrt
total 507776
drwxr-xr-x 2 root root 2 Aug 7 02:25 other
-rw------- 1 root root 260046848 Aug 7 16:43 UAKLZ1_suspend

root@UA-GLOBAL:/cloudS# du -sh UAKLZ1_suspend
248M UAKLZ1_suspend

root@UA-GLOBAL:/cloudS# zoneadm -z UAKLZ1 boot

root@UA-GLOBAL:/cloudS# ls -lrt
total 3
drwxr-xr-x 2 root root 2 Aug 7 02:25 other

root@UA-GLOBAL:/cloudS# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
3 UAKLZ1 running - solaris-kz excl

root@UA-GLOBAL:/cloudS# zlogin UAKLZ1 uptime
4:49pm up 14:21, 0 users, load average: 0.71, 0.82, 0.39

root@UA-GLOBAL:/cloudS#

 
I hope this gives you an idea of kernel zone configuration, installation, and other features.

Oracle Solaris Support for Advanced Format Disks


Documentation by: Raoul Carag and Cindy Swearingen
Previous Oracle Solaris releases support disks with a physical block size and a logical block size of 512 bytes. This is the traditional disk block size that is an industry standard. These disks are generally known as 512n disks for 512 native devices.
Currently, disk manufacturers are providing larger capacity disks known as advanced format (AF) disks, which is a general term that describes a hard disk drive that exceeds a 512-byte block size.
AF disks are generally in the 4-KB block size range, but vary as follows:
  • 4-KB native disk (4kn)—Has a physical and logical block size of 4 KB
  • 512-byte emulation (512e)—Has a physical block size of 4 KB but reports a logical block size of 512 bytes
Current Oracle Solaris releases support 512n disks as well as AF disks.

Identifying an AF Disk's Type

The following examples show how to identify the logical block size and the physical block size of a specified disk, which, in turn, identifies whether the disk is 512n, 512e, or 4kn.
The output of the following command identifies the device as a 512n disk.

# devprop -n /dev/rdsk/c2t5000C5001019EBABd0 device-blksize device-pblksize
512
512

The output of the following command identifies the device as a 512e disk.

# devprop -n /dev/rdsk/c2t5000C50010199F2Fd0 device-blksize device-pblksize
512
4096

The output of the following command identifies the device as a 4kn disk.

# devprop -n /dev/rdsk/c2t5000C50010198513d0 device-blksize device-pblksize
4096
4096
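If you have many disks to check, the two property values can be combined into a simple classification. The following wrapper script is a minimal sketch (the script itself and any device paths you pass to it are not part of the Oracle documentation):

#!/bin/sh
# classify_af.sh - report whether each disk argument is 512n, 512e, or 4kn,
# based on its logical (device-blksize) and physical (device-pblksize) block sizes.
for disk in "$@"; do
    lbs=`devprop -n "$disk" device-blksize`
    pbs=`devprop -n "$disk" device-pblksize`
    case "$lbs/$pbs" in
        512/512)   type=512n ;;
        512/4096)  type=512e ;;
        4096/4096) type=4kn ;;
        *)         type=unknown ;;
    esac
    echo "$disk: $type (logical=$lbs, physical=$pbs)"
done

For example: # ./classify_af.sh /dev/rdsk/c2t5000C5001019EBABd0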

 

Identifying the Supported AF Disks for Your Environment

When you consider purchasing AF disks for storage on your Oracle Solaris systems, review the following tables to see which disk type is appropriate for your environment.

Table 1. Support for AF Disks as Non-Root Devices

AF Disk Type  File System/Volume Manager  Oracle Solaris 10 1/13         Oracle Solaris 11 11/11        Oracle Solaris 11.1            Oracle Solaris 11.2
512e          ZFS                         Yes                            Yes                            Yes                            Yes
512e          UFS                         Yes, with performance penalty  Yes, with performance penalty  Yes, with performance penalty  Yes, with performance penalty
512e          SVM                         Yes, with performance penalty  Yes, with performance penalty  Yes, with performance penalty  Yes, with performance penalty
4kn           ZFS                         Yes                            Yes                            Yes                            Yes
4kn           UFS                         No                             No                             No                             No

 
Table 2. Support for AF Disks as Root Devices

AF Disk Type  Platform  Oracle Solaris 10 1/13             Oracle Solaris 11 11/11        Oracle Solaris 11.1            Oracle Solaris 11.2
512e          SPARC     UFS; ZFS                           ZFS                            ZFS                            ZFS
512e          x86-UEFI  N/A                                ZFS                            ZFS                            ZFS
512e          x86-BIOS  UFS; ZFS with GRUB patch 15810943  ZFS                            ZFS                            ZFS
4kn           SPARC     ZFS with OBP 4.34.x and later      ZFS with OBP 4.34.x and later  ZFS with OBP 4.34.x and later  ZFS with OBP 4.34.x and later
4kn           x86-UEFI  N/A                                No                             No                             ZFS
4kn           x86-BIOS  No                                 No                             No                             No

Solaris 11.2 - Server Migration with ZFS “Shadow Migration”


Documentation by: Alexandre Borges

Imagine that we have some data on an older server running Oracle Solaris 11, and we need to migrate this data to a new server running Oracle Solaris 11.1. This is a classic case where we could use a new feature of Oracle Solaris 11 called Shadow Migration. Shadow Migration can also be used to migrate data from systems running Oracle Solaris 10 releases.


Using Shadow Migration is very easy; for example, we could migrate shared ZFS, UFS, or VxFS (Symantec) file systems through NFS or even through a local file system.
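The shadow property on the target file system takes a URI that identifies the (read-only) source: an nfs:// URI for a file system shared over NFS, or a file:/// URI for a local path. A minimal sketch of both forms, using placeholder host, path, and dataset names (the actual commands for this walkthrough appear later):

NFS source shared read-only by another host:
    zfs create -o shadow=nfs://source-host/export/data rpool/migrated-data
Local read-only source file system:
    zfs create -o shadow=file:///export/olddata rpool/migrated-data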


To simulate an example of using Shadow Migration to migrate data between two systems, we're going to use two virtual machines that have Oracle Solaris 11 installed (solaris11-1 and solaris11-2) in a virtual environment provided by Oracle VM VirtualBox. Furthermore, on the solaris11-2 host, we're going to share a file system (/solaris11-2-pool/migrated-filesystem) and migrate the data inside this file system to the solaris11-1 host.

The first step of this procedure is installing the shadow-migration package:


root@solaris11-1:~# pkg install shadow-migration
Packages to install: 1
Create boot environment: No
Create backup boot environment: No
Services to change: 1

DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 1/1 14/14 0.2/0.2 126k/s

PHASE ITEMS
Installing new actions 39/39
Updating package state database Done
Updating image state Done
Creating fast lookup database Done

 
As you can see, the shadowd service is initially disabled:

root@solaris11-1:~# svcs -a | grep shadow
disabled 18:22:13 svc:/system/filesystem/shadowd:default

So we must start it:

root@solaris11-1:~# svcadm enable svc:/system/filesystem/shadowd:default
root@solaris11-1:~# svcs -a | grep shadow
online 18:23:17 svc:/system/filesystem/shadowd:default

On the second machine (from which we want to shadow the file system), the file system must be shared using the NFS service with the read-only attribute (ro) to avoid changing its contents during the shadowing:

root@solaris11-2:~/SFHA601# share -F nfs -o ro /solaris11-2-pool/migrated-filesystem
root@solaris11-2:~/SFHA601# share
IPC$ smb - Remote IPC
solaris11-2-pool_migrated-filesystem /solaris11-2-pool/migrated-filesystem nfs sec=sys,ro

The advantage of using Shadow Migration is that when the data migration begins, any NFS clients that are accessing the source file system automatically migrate to accessing the target file system.
On the first machine (solaris11-1), we can confirm that solaris11-2 is offering the file system through NFS by running the following command:

root@solaris11-1:/# dfshares solaris11-2
RESOURCE SERVER ACCESS TRANSPORT
solaris11-2:/solaris11-2-pool/migrated-filesystem solaris11-2 - -

Now it's time to shadow the ZFS file system from the second machine (solaris11-2) to the first one (solaris11-1) by creating a file system named rpool/shadow_test:

root@solaris11-1:/rpool# zfs create -o shadow=nfs://solaris11-2/solaris11-2-pool/migrated-filesystem rpool/shadow_test

This can be a slow process. Afterwards, you should execute the shadowstat command:

root@solaris11-1:/rpool# shadowstat
EST
BYTES BYTES ELAPSED
DATASET XFRD LEFT ERRORS TIME
rpool/shadow_test 4.73M - - 00:00:04
rpool/shadow_test 46.0M - - 00:00:14
rpool/shadow_test 52.7M - - 00:00:24
rpool/shadow_test 55.1M - - 00:00:34
rpool/shadow_test 57.5M - - 00:00:44
rpool/shadow_test 58.1M - - 00:00:54
rpool/shadow_test 59.6M 128M - 00:01:04
rpool/shadow_test 62.8M 224M - 00:01:14
rpool/shadow_test 89.0M 187M - 00:01:24
rpool/shadow_test 92.7M 360M - 00:01:34
rpool/shadow_test 120M 168M - 00:01:44
rpool/shadow_test 163M 8E - 00:01:54
rpool/shadow_test 178M 8E - 00:02:04
rpool/shadow_test 178M 8E - 00:02:14
rpool/shadow_test 178M 8E - 00:02:24
rpool/shadow_test 178M 8E - 00:02:34
No migrations in progress

We can verify that the migration finished by running the following command:

root@solaris11-1:/rpool# zfs get -r shadow rpool/shadow_test
NAME PROPERTY VALUE SOURCE
rpool/shadow_test shadow none -

Perfect! Everything worked as expected.

The same procedure could be done using two ZFS file systems. For example, we could create a new file system named rpool/filesystem_source and copy a directory that contains many files into it:

root@solaris11-1:~# cp -r NetBackup_7.5_Solaris_x86/ /rpool/filesystem_source/
root@solaris11-1:~# zfs set readonly=on rpool/filesystem_source
root@solaris11-1:~# zfs create -o shadow=file:///rpool/filesystem_source rpool/filesystem_target
root@solaris11-1:~# shadowstat
EST
BYTES BYTES ELAPSED
DATASET XFRD LEFT ERRORS TIME
rpool/filesystem_target 107K - - 00:00:04
rpool/filesystem_target 51.2M - - 00:00:14
rpool/filesystem_target 114M - - 00:00:24
rpool/filesystem_target 114M - - 00:00:34
rpool/filesystem_target 114M - - 00:00:44
rpool/filesystem_target 114M - - 00:00:54
rpool/filesystem_target 114M 8E - 00:01:04
rpool/filesystem_target 114M 8E - 00:01:14
rpool/filesystem_target 114M 8E - 00:01:24
rpool/filesystem_target 672M 8E - 00:01:34
rpool/filesystem_target 672M 8E - 00:01:44
rpool/filesystem_target 672M 8E - 00:01:54
rpool/filesystem_target 672M 8E - 00:02:04
rpool/filesystem_target 672M 8E - 00:02:14
rpool/filesystem_target 672M 8E - 00:02:24
rpool/filesystem_target 672M 8E - 00:02:34
No migrations in progress

Wow!!! We repeated the recipe, but we used the appropriate syntax for shadowing between two local ZFS file systems. In the same way, we're able to confirm that the shadowing operation has finished.

Oracle Solaris Cluster 4.2/Oracle Solaris 11.2 - Configure a Failover Oracle Solaris Kernel Zone


About Oracle Solaris Cluster Failover Zones

Oracle Solaris Zones include support for fully independent and isolated environments called kernel zones, which provide a full kernel and user environment within a zone. Kernel zones increase operational flexibility and are ideal for multitenant environments where maintenance windows are significantly harder to schedule. Kernel zones can run at a different kernel version from the global zone and can be updated separately without requiring a reboot of the global zone. You can also use kernel zones in combination with Oracle VM Server for SPARC for greater virtualization flexibility.
This article describes how to set up a failover kernel zone on a two-node cluster.

Configuration Assumptions

This article assumes the following configuration is used:
  • The cluster is installed and configured with Oracle Solaris 11.2 and Oracle Solaris Cluster 4.2.
  • The repositories for Oracle Solaris and Oracle Solaris Cluster are configured on the cluster nodes.
  • The cluster hardware is a supported configuration for Oracle Solaris Cluster 4.2 software. For more information, see the Oracle Solaris Cluster 4.x Compatibility Guide.
  • The cluster is a two-node SPARC cluster. (However, the installation procedure is applicable to x86 clusters as well.)
  • Each node has two spare network interfaces to be used as private interconnects, also known as transports, and at least one network interface that is connected to the public network.
  • SCSI shared storage is connected to the two nodes.
  • Your setup looks like Figure 1. You might have fewer or more devices, depending on your system or network configuration.
It is recommended that you have console access to the nodes during administration, but this is not required.

Figure 1. Oracle Solaris Cluster hardware configuration

Prerequisites

Ensure the following prerequisites are met:
  1. The boot disk of a kernel zone in an HA zone configuration must reside on a shared disk.
  2. The zone must be configured on each cluster node where the zone can fail over.
  3. The zone must be active on only one node at a time, and the zone's address must be plumbed on only one node at a time.
  4. Make sure you have a shared disk available to host the zonepath for the failover zone. You can use /usr/cluster/bin/scdidadm -L or /usr/cluster/bin/cldevice list to see the shared disks. Each cluster node has a path to the shared disk.
  5. Verify that the Oracle Solaris operating system version is at least 11.2.

    root@phys-schost-1:~# uname -a
    SunOS phys-schost-1 5.11 11.2 sun4v sparc sun4v
  6. Verify that the kernel zone brand package, brand/brand-solaris-kz, is installed on the host.

    root@phys-schost-1# pkg list brand/brand-solaris-kz
    NAME (PUBLISHER) VERSION IFO
    system/zones/brand/brand-solaris-kz 0.5.11-0.175.2.0.0.41.0 i--

  7. Run the virtinfo command to verify that kernel zones are supported on the cluster nodes. The following example shows that kernel zones are supported on the host phys-schost-1.

    root@phys-schost-1:~# virtinfo
    NAME CLASS
    logical-domain current
    non-global-zone supported
    kernel-zone supported
  8. Identify two shared disks, one for the boot disk and the other for the suspend disk. Suspend and resume are supported for a kernel zone only if the zone has a suspend resource in its configuration. If the suspend device is not configured, warm migration is not possible. Kernel zones support cold and warm migration during switchover. This example uses shared disks d7 and d8. You can use suriadm to look up the URIs for both disks.

    root@phys-schost-1:~# /usr/cluster/bin/scdidadm -L d7 d8
    7 phys-schost-1:/dev/rdsk/c0t60080E500017B5D80000084D52711BB9d0 /dev/did/rdsk/d7
    7 phys-schost-2:/dev/rdsk/c0t60080E500017B5D80000084D52711BB9d0 /dev/did/rdsk/d7
    8 phys-schost-1:/dev/rdsk/c0t60080E500017B5D80000084B52711BAEd0 /dev/did/rdsk/d8
    8 phys-schost-2:/dev/rdsk/c0t60080E500017B5D80000084B52711BAEd0 /dev/did/rdsk/d8
    root@phys-schost-1:~# suriadm lookup-uri /dev/did/dsk/d7
    dev:did/dsk/d7
    root@phys-schost-1:~# suriadm lookup-uri /dev/did/dsk/d8
    dev:did/dsk/d8
  9. The zone source and destination must be on the same platform for zone migration. On x86 systems, the vendor as well as the CPU revision must be identical. On SPARC systems, the zone source and destination must be on the same hardware platform. For example, you cannot migrate a kernel zone from a SPARC T4 host to a SPARC T3 host. 

Enable a Kernel Zone to Run in a Failover Configuration Using a Failover File System

In a failover configuration, the zone's zonepath must reside on a highly available file system. Oracle Solaris Cluster provides the SUNW.HAStoragePlus service to manage a failover file system.
  1. Register the SUNW.HAStoragePlus (HASP) resource type.

    phys-schost-1# /usr/cluster/bin/clrt register SUNW.HAStoragePlus

  2. Create the failover resource group.

    phys-schost-1# /usr/cluster/bin/clrg create sol-kz-fz1-rg
  3. Create a HAStoragePlus resource to monitor the disks that are used as boot or suspend devices for the kernel zone.

    root@phys-schost-1:~# clrs create -t SUNW.HAStoragePlus -g sol-kz-fz1-rg \
    -p GlobalDevicePaths=dsk/d7,dsk/d8 sol-kz-fz1-hasp-rs
    root@phys-schost-1:~# /usr/cluster/bin/clrg online -emM -n phys-schost-1 sol-kz-fz1-rg
  4. Create and configure the zone on phys-schost-1. You must ensure that the boot and suspend devices reside on shared disks. For configuring a two-node cluster, execute the following commands on phys-schost-1 and then replicate the zone configuration to phys-schost-2.

    root@phys-schost-1:~# zonecfg -z sol-kz-fz1 'create -b; set brand=solaris-kz;
        add capped-memory; set physical=2G; end;
        add device; set storage=dev:did/dsk/d7; set bootpri=1; end;
        add suspend; set storage=dev:did/dsk/d8; end;
        add anet; set lower-link=auto; end;
        set autoboot=false;
        add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
  5. Verify that the zone is configured.

    phys-schost-1# zoneadm list -cv

    ID NAME STATUS PATH BRAND IP
    0 global running / solaris shared
    - sol-kz-fz1 configured - solaris-kz excl

  6. Install the zone using zoneadm and then boot the zone.

    root@phys-schost-1:~# zoneadm -z sol-kz-fz1 install
    Progress being logged to /var/log/zones/zoneadm.20140829T212403Z.sol-kz-fz1.install
    pkg cache: Using /var/pkg/publisher.
    Install Log: /system/volatile/install.4811/install_log
    AI Manifest: /tmp/zoneadm4203.ZLaaYi/devel-ai-manifest.xml
    SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Installation: Starting ...
    Creating IPS image
    Installing packages from:
    solaris
    origin: http://solaris-publisher.domain.com/support/sru/
    ha-cluster
    origin: http://cluster-publisher.domain.com/solariscluster/sru/
    The following licenses have been accepted and not displayed.
    Please review the licenses for the following packages post-install:
    consolidation/osnet/osnet-incorporation
    Package licenses may be viewed using the command:
    pkg info --license <pkg_fmri>

    DOWNLOAD PKGS FILES XFER (MB) SPEED
    Completed 482/482 64261/64261 544.1/544.1 1.9M/s

    PHASE ITEMS
    Installing new actions 87569/87569
    Updating package state database Done
    Updating package cache 0/0
    Updating image state Done
    Creating fast lookup database Done
    Installation: Succeeded
    Done: Installation completed in 609.014 seconds.
  7. Verify that the zone is successfully installed and boots up.

    phys-schost-1# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / solaris shared
    - sol-kz-fz1 installed - solaris-kz excl
  8. In another window, log in to the zone's console and boot the zone. Follow the prompts through the system configuration interactive screens to configure the zone.

    phys-schost-1# zlogin -C sol-kz-fz1
    phys-schost-1# zoneadm -z sol-kz-fz1 boot
  9. Shut down the zone and switch the resource group to another node available in the list of resource group nodes.

    phys-schost-1# zoneadm -z sol-kz-fz1 shutdown
    phys-schost-1# zoneadm -z sol-kz-fz1 detach -F
    phys-schost-1# /usr/cluster/bin/clrg switch -n phys-schost-2 sol-kz-fz1-rg
    phys-schost-1# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / solaris shared
    - sol-kz-fz1 configured - solaris-kz excl
  10. Copy the zone configuration to the second node and create the kernel zone on the second node using the configuration file.

    root@phys-schost-1:~# zonecfg -z sol-kz-fz1 export -f /var/cluster/run/sol-kz-fz1.cfg
    root@phys-schost-1:~# scp /var/cluster/run/sol-kz-fz1.cfg phys-schost-2:/var/cluster/run/
    root@phys-schost-1:~# rm /var/cluster/run/sol-kz-fz1.cfg
    root@phys-schost-2:~# zonecfg -z sol-kz-fz1 -f /var/cluster/run/sol-kz-fz1.cfg
    root@phys-schost-2:~# rm /var/cluster/run/sol-kz-fz1.cfg
  11. Attach the zone and verify that the zone can boot on the second node. Log in from another session to ensure that the zone boots up fine.

    root@phys-schost-2:~# zoneadm -z sol-kz-fz1 attach -x force-takeover
    root@phys-schost-2:~# zoneadm -z sol-kz-fz1 boot
    root@phys-schost-2:~# zlogin -C sol-kz-fz1

  12. Shut down and detach the zone.

    root@phys-schost-2:~# zoneadm -z sol-kz-fz1 shutdown
    root@phys-schost-2:~# zoneadm -z sol-kz-fz1 detach -F
  13. Install the failover zone agent if it is not already installed.

    root@phys-schost-1# pkg install ha-cluster/data-service/ha-zones
    root@phys-schost-2# pkg install ha-cluster/data-service/ha-zones
  14. To create the resource from any one node, edit the sczbt_config file and set the parameters as shown below.

    root@phys-schost-2:~# clrt register SUNW.gds
    root@phys-schost-2:~# cd /opt/SUNWsczone/sczbt/util
    root@phys-schost-2:~# cp -p sczbt_config sczbt_config.sol-kz-fz1-rs
    root@phys-schost-2:~# vi sczbt_config.sol-kz-fz1-rs
    RS=sol-kz-fz1-rs
    RG=sol-kz-fz1-rg
    PARAMETERDIR=
    SC_NETWORK=false
    SC_LH=
    FAILOVER=true
    HAS_RS=sol-kz-fz1-hasp-rs
    Zonename="sol-kz-fz1"
    Zonebrand="solaris-kz"
    Zonebootopt=""
    Milestone="svc:/milestone/multi-user-server"
    LXrunlevel="3"
    SLrunlevel="3"
    Mounts=""
    Migrationtype="warm"
  15. Register the zone-boot resource using the edited configuration file, and then enable the resource.

    root@phys-schost-2:~# ./sczbt_register -f ./sczbt_config.sol-kz-fz1-rs
    sourcing ./sczbt_config.kz
    Registration of resource kz-rs succeeded.
    root@phys-schost-2:~# /usr/cluster/bin/clrs enable sol-kz-fz1-rs
  16. Check the status of the resource groups and resources.

    root@phys-schost-2:~# /usr/cluster/bin/clrs status -g sol-kz-fz1-rg
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    ------------------- ------------- ----- -------------------
    sol-kz-fz1-rs phys-schost-1 Online Online - Service is online.
    phys-schost-2 Offline Offline

    sol-kz-fz1-hasp-rs phys-schost-1 Online Online
    phys-schost-2 Offline Offline
    root@phys-schost-2:~#
  17. Log in using the zlogin -C sol-kz-fz1 command to verify that the zone boots up successfully, and then switch the resource group to the other node to test switchover.

    root@phys-schost-2:~# /usr/cluster/bin/clrg switch -n phys-schost-1 sol-kz-fz1-rg
    root@phys-schost-2:~# /usr/cluster/bin/clrs status -g sol-kz-fz1-rg
    === Cluster Resources ===

    Resource Name Node Name State Status Message
    ------------------- ---------- ----- -------------------
    sol-kz-fz1-rs phys-schost-1 Online Online
    phys-schost-2 Offline Offline

    ha-zones-hasp-rs phys-schost-1 Online Online
    phys-schost-2 Offline Offline
    root@phys-schost-2:~#