Thursday, March 19, 2015

Solaris 11: Resolve ZFS Device faults/alerts using fmadm


Procedure:
  1. Identify the faulted device with the fmadm faulty command.
  2. Replace the faulty or retired device or clear the device error.
  3. Clear the FMA fault.
  4. Confirm that the fault is cleared.
1. Identify the faulted device with the fmadm faulty command. 
For example:
# fmadm faulty
--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Jun 20 16:30:52 55c82fff-b709-62f5-b66e-b4e1bbe9dcb1  ZFS-8000-LR    Major

Problem Status : solved
Diag Engine : zfs-diagnosis / 1.0
System Manufacturer : unknown
Name : ORCL,SPARC-T3-4
Part_Number : unknown
Serial_Number : 1120BDRCCD
Host_ID : 84a02d28

----------------------------------------
Suspect 1 of 1 :
Fault class : fault.fs.zfs.open_failed
Certainty : 100%
Affects : zfs://pool=86124fa573cad84e/vdev=25d36cd46e0a7f49/
pool_name=pond/vdev_name=id1,sd@n5000c500335dc60f/a
Status : faulted and taken out of service

FRU Name : "zfs://pool=86124fa573cad84e/vdev=25d36cd46e0a7f49/
pool_name=pond/vdev_name=id1,sd@n5000c500335dc60f/a"
Status : faulty

Description : ZFS device 'id1,sd@n5000c500335dc60f/a' in pool 'pond' failed to open.

Response : An attempt will be made to activate a hot spare if available.

Impact : Fault tolerance of the pool may be compromised.

Action : Use 'fmadm faulty' to provide a more detailed view of this event.
Run 'zpool status -lx' for more information. Please refer to the associated reference document at http://support.oracle.com/msg/ZFS-8000-LR for the latest service procedures and policies regarding this diagnosis.
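Before clearing anything, it can be worth cross-checking the pool state and the full fault record. A quick sketch, reusing the pool name and event ID from the output above:
# zpool status -x pond
# fmdump -V -u 55c82fff-b709-62f5-b66e-b4e1bbe9dcb1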


2. Replace the faulty or retired device or clear the device error.
If an intermittent device error occurred but the device was not replaced, you can clear the previous error. For example:
# zpool clear pond c0t5000C500335DC60Fd0
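If the device actually failed and was physically replaced, tell ZFS about the replacement instead and let it resilver. A minimal sketch, reusing the pool and device names from above (a single argument is enough when the new disk sits in the same slot):
# zpool replace pond c0t5000C500335DC60Fd0
# zpool status pond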

 

3. Clear the FMA fault. For example:
# fmadm repaired zfs://pool=86124fa573cad84e/vdev=25d36cd46e0a7f49/pool_name=pond/vdev_name=id1,sd@n5000c500335dc60f/a

fmadm: recorded repair to of zfs://pool=86124fa573cad84e/vdev=25d36cd46e0a7f49/pool_name=pond/vdev_name=id1,sd@n5000c500335dc60f/a
4. Confirm that the fault is cleared.
# fmadm faulty
If the error is cleared, the fmadm faulty command returns nothing.

Solaris 11: Administration of SCSI devices using cfgadm


Below are the operations that can be performed on a SCSI device:
  • Connect a SCSI Controller
  • Add a SCSI Device to a SCSI Bus
  • Replace a SCSI Disk on a SCSI Controller 
  • Remove a SCSI Device 
Connect a SCSI Controller

Step 1: Verify that the device is disconnected before you connect it.
# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c2 unavailable disconnected configured unknown
c2::dsk/c2t0d0 unavailable disconnected configured unknown
c3 scsi-sas connected configured unknown
c3::dsk/c3t0d0 disk connected configured unknown
c3::dsk/c3t1d0 disk connected configured unknown
c3::dsk/c3t2d0 disk connected configured unknown
c3::dsk/c3t3d0 disk connected configured unknown

Step 2: Connect the SCSI controller.
# cfgadm -c connect c2

Step 3: Verify that the SCSI controller is connected.
# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c2 scsi-bus connected configured unknown
c2::dsk/c2t0d0 CD-ROM connected configured unknown
c3 scsi-sas connected configured unknown
c3::dsk/c3t0d0 disk connected configured unknown
c3::dsk/c3t1d0 disk connected configured unknown
c3::dsk/c3t2d0 disk connected configured unknown
c3::dsk/c3t3d0 disk connected configured unknown
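For reference, the reverse operation quiesces the bus and disconnects the controller again; don't do this while devices on the bus are in use. For example:
# cfgadm -c disconnect c2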

Add a SCSI Device to a SCSI Bus

Step 1: Identify the current SCSI configuration.
# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c2 scsi-bus connected configured unknown
c2::dsk/c2t0d0 CD-ROM connected configured unknown
c3 scsi-sas connected configured unknown
c3::dsk/c3t0d0 disk connected configured unknown
c3::dsk/c3t1d0 disk connected configured unknown
c3::dsk/c3t2d0 disk connected configured unknown

Step 2: Add the SCSI device to the SCSI bus.

2a. Type the following cfgadm command.
For example:

# cfgadm -x insert_device c3
Adding device to SCSI HBA: /devices/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2
This operation will suspend activity on SCSI bus: c3

2b. Type y at the Continue (yes/no)? prompt to proceed.
Continue (yes/no)? y
SCSI bus quiesced successfully.
It is now safe to proceed with hotplug operation.
I/O activity on the SCSI bus is suspended while the hot-plug operation is in progress.

2c. Connect the device and then power it on.

2d. Type y at the Enter y if operation is complete or n to abort (yes/no)? prompt.
Enter y if operation is complete or n to abort (yes/no)? y


Step 3: Verify that the device has been added.
# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c2 scsi-bus connected configured unknown
c2::dsk/c2t0d0 CD-ROM connected configured unknown
c3 scsi-sas connected configured unknown
c3::dsk/c3t0d0 disk connected configured unknown
c3::dsk/c3t1d0 disk connected configured unknown
c3::dsk/c3t2d0 disk connected configured unknown
c3::dsk/c3t3d0 disk connected configured unknown
A new disk has been added to controller c3.
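Depending on the disk, you may still need to create its device links and label it before use. Something like:
# devfsadm -c disk
# format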

Replace a SCSI Disk on a SCSI Controller


Step 1: Identify the current SCSI configuration.
# cfgadm -al
c2 scsi-bus connected configured unknown
c2::dsk/c2t0d0 CD-ROM connected configured unknown
c3 scsi-sas connected configured unknown
c3::dsk/c3t0d0 disk connected configured unknown
c3::dsk/c3t1d0 disk connected configured unknown
c3::dsk/c3t2d0 disk connected configured unknown
c3::dsk/c3t3d0 disk connected configured unknown

Step 2: Replace a device on the SCSI bus with another device of the same type.
2a. Type the following cfgadm command.
For example:

 # cfgadm -x replace_device c3::dsk/c3t3d0
Replacing SCSI device: /devices/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@3,0
This operation will suspend activity on SCSI bus: c3


2b. Type y at the Continue (yes/no)? prompt to proceed.
I/O activity on the SCSI bus is suspended while the hot-plug operation is in progress.

 Continue (yes/no)? y
SCSI bus quiesced successfully.
It is now safe to proceed with hotplug operation.

2c. Power off the device to be removed and remove it.

2d. Add the replacement device. Then, power it on.
The replacement device should be of the same type and at the same address (target and LUN) as the device to be removed.


2e. Type y at the Enter y if operation is complete or n to abort (yes/no)? prompt.
Enter y if operation is complete or n to abort (yes/no)? y


Step 3 : Verify that the device has been replaced.
# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c2 scsi-bus connected configured unknown
c2::dsk/c2t0d0 CD-ROM connected configured unknown
c3 scsi-sas connected configured unknown
c3::dsk/c3t0d0 disk connected configured unknown
c3::dsk/c3t1d0 disk connected configured unknown
c3::dsk/c3t2d0 disk connected configured unknown
c3::dsk/c3t3d0 disk connected configured unknown
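If the disk you just swapped out was part of a ZFS pool, ZFS also needs to be told about the replacement. A sketch, assuming a hypothetical pool named pond built on c3t3d0:
# zpool replace pond c3t3d0
# zpool status pond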

Remove a SCSI Device 

Step 1: Identify the current SCSI configuration.
# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c2 scsi-bus connected configured unknown
c2::dsk/c2t0d0 CD-ROM connected configured unknown
c3 scsi-sas connected configured unknown
c3::dsk/c3t0d0 disk connected configured unknown
c3::dsk/c3t1d0 disk connected configured unknown
c3::dsk/c3t2d0 disk connected configured unknown
c3::dsk/c3t3d0 disk connected configured unknown

Step 2: Remove the SCSI device from the system.
2a. Type the following cfgadm command.
For example:

# cfgadm -x remove_device c3::dsk/c3t3d0
Removing SCSI device: /devices/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@3,0
This operation will suspend activity on SCSI bus: c3

2b. Type y at the Continue (yes/no)? prompt to proceed.
Continue (yes/no)? y
SCSI bus quiesced successfully.
It is now safe to proceed with hotplug operation.
I/O activity on the SCSI bus is suspended while the hot-plug operation is in progress.

2c. Power off the device to be removed and remove it.

2d. Type y at the Enter y if operation is complete or n to abort (yes/no)? prompt.
Enter y if operation is complete or n to abort (yes/no)? y
Note – This step must be performed if you are removing a SCSI RAID device from a SCSI RAID array.


Step 3: Verify that the device has been removed from the system.
# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c2 scsi-bus connected configured unknown
c2::dsk/c2t0d0 CD-ROM connected configured unknown
c3 scsi-sas connected configured unknown
c3::dsk/c3t0d0 disk connected configured unknown
c3::dsk/c3t1d0 disk connected configured unknown
c3::dsk/c3t2d0 disk connected configured unknown
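After the device is gone, stale /dev links can be cleaned up with devfsadm's cleanup option:
# devfsadm -C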

Solaris 10/11: How to enable/disable automount debugging

If you are having problems with an automounter directory, always try to mount the partition by hand first, to verify whether the problem is related to the automounter or to NFS in general. If the manual mount fails, the problem is with NFS; if it works, the problem is with the automounter (a quick manual test is sketched below).
The automounter also has built-in debugging, which can be used to examine exactly what it is doing. It is best to stop automountd and restart it with the debug flags so that you can see everything from the start. The procedure below can be used to enable or disable automount debugging.
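As a quick way to rule out NFS first, a manual mount test might look like this; nfs-server:/export/home is a placeholder, so substitute the server and share behind the automount map you are debugging:
# mount -F nfs nfs-server:/export/home /mnt
# ls /mnt
# umount /mnt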

Solaris 10 
Enable Solaris 10 automount debug output
1. Uncomment and set the following lines in the /etc/default/autofs file, then restart the service as shown after the listing:
# Verbose mode.  Notifies of autofs mounts, unmounts, or other
# non-essential events.  This is equivalent to the "-v" argument.
AUTOMOUNT_VERBOSE=TRUE
# Verbose.  Log status messages to the console.
# This is equivalent to the "-v" argument.
AUTOMOUNTD_VERBOSE=TRUE
# Trace.  Expand each RPC call and display it on standard output.
# This is equivalent to the "-T" argument.
AUTOMOUNTD_TRACE=3
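After editing /etc/default/autofs, restart the autofs service so automountd picks up the new settings (the same applies after reverting them below):
# svcadm restart autofs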

DISABLE Solaris 10 automount debug output

2. Revert /etc/default/autofs entries back to default to shut off logging:
#AUTOMOUNT_VERBOSE=FALSE
#AUTOMOUNTD_VERBOSE=FALSE
#AUTOMOUNTD_TRACE=0

Refer to the file /etc/default/autofs for the default settings.


Solaris 11
Enable Solaris 11 automount debug output 
1.  Alter the autofs debugging properties with sharectl:
root@my-nfs-server# sharectl get autofs
timeout=600
automount_verbose=false
automountd_verbose=false
nobrowse=false
trace=0
environment=


root@my-nfs-server# sharectl set -p automount_verbose=true autofs
root@my-nfs-server# sharectl set -p automountd_verbose=true autofs
root@my-nfs-server# sharectl set -p trace=3 autofs
root@my-nfs-server# sharectl get autofs
timeout=600
automount_verbose=true
automountd_verbose=true
nobrowse=false
trace=3
environment=
  
2. Tail the autofs log: 
root@my-nfs-server# tail -f /var/svc/log/system-filesystem-autofs:default.log
[ Jan 23 09:48:36 Stopping because service restarting. ]
[ Jan 23 09:48:36 Executing stop method ("/lib/svc/method/svc-autofs stop 76") ]
[ Jan 23 09:48:42 Method "stop" exited with status 0 ]
[ Jan 23 09:48:42 Executing start method ("/lib/svc/method/svc-autofs start") ]
[ Jan 23 09:48:42 Method "start" exited with status 0 ]
[ Jan 23 09:49:48 Stopping because service restarting. ]
[ Jan 23 09:49:48 Executing stop method ("/lib/svc/method/svc-autofs stop 8276539") ]
[ Jan 23 09:49:53 Method "stop" exited with status 0 ]
[ Jan 23 09:49:53 Executing start method ("/lib/svc/method/svc-autofs start") ]
[ Jan 23 09:49:53 Method "start" exited with status 0 ]

3. Restart the autofs service:

# svcadm restart autofs

4. Examine output from the tail -f command to ensure the logging is now enabled:
[ Jan 23 09:56:18 Stopping because service restarting. ]
[ Jan 23 09:56:18 Executing stop method ("/lib/svc/method/svc-autofs stop 8276563") ]
[ Jan 23 09:56:23 Method "stop" exited with status 0 ]
[ Jan 23 09:56:23 Executing start method ("/lib/svc/method/svc-autofs start") ]
t1      init_ldap: setting up for version 2
automount: /net mounted
automount: /home mounted
automount: no unmounts
[ Jan 23 09:56:23 Method "start" exited with status 0 ]

Reproduce or wait for the automount activity that triggers the failure, as appropriate, then collect the /var/svc/log/system-filesystem-autofs:default.log file for analysis.

DISABLE Solaris 11 automount debug output
1. Restore the original values to the autofs debug properties:
root@my-nfs-server# sharectl set -p automount_verbose=false autofs
root@my-nfs-server# sharectl set -p automountd_verbose=false autofs
root@my-nfs-server# sharectl set -p trace=0 autofs
root@my-nfs-server# sharectl get autofs
timeout=600
automount_verbose=false
automountd_verbose=false
nobrowse=false
trace=0
environment=

2. Restart autofs service:
# svcadm restart autofs

Wednesday, March 18, 2015

Solaris 10: Live upgrade with ZFS rpool - Example


Solaris 10 Live Upgrade with ZFS is really simple compared to some of the messes you could get into with SVM mirrored root disks. Below is a simple Live Upgrade BE creation and patching example. Solaris Live Upgrade works the same as in previous releases when you use ZFS, with the same commands; as I said, it's just easier. A really great feature is that you can now migrate from UFS file systems to a ZFS root pool and create new boot environments within a ZFS root pool. I'll show that in another blog entry at a later date.
# lucreate -n Dec2012
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named – assigning name <s10s_u10wos_17b>.
Current boot environment is named <s10s_u10wos_17b>.
Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c0t5000CCA02533AC20d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c0t5000CCA02533AC20d0s0>.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <Dec2012>.
Source boot environment is <s10s_u10wos_17b>.
Creating file systems on boot environment <Dec2012>.
Populating file systems on boot environment <Dec2012>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@Dec2012>.
Creating clone for <rpool/ROOT/s10s_u10wos_17b@Dec2012> on <rpool/ROOT/Dec2012>.
Mounting ABE <Dec2012>.
Generating file list.
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <Dec2012>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <s10s_u10wos_17b>.
Making boot environment <Dec2012> bootable.
Population of boot environment <Dec2012> successful.
Creation of boot environment <Dec2012> successful.
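At this point the new BE should show up alongside the current one; a quick check before patching might be:
# lustatus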

Now patch it. Let's move to my patch folder, where I unzipped the patch cluster:
# cd /root/10_Recommended/patches
# luupgrade -n Dec2012 -s /root/10_Recommended/patches -t `cat patch_order`
Validating the contents of the media </root/10_Recommended/patches>.
The media contains 358 software patches that can be added.
Mounting the BE <Dec2012>.
---------------- SNIP -------------------------------------------------
Patch 146054-07 has been successfully installed.
See /a/var/sadm/patch/146054-07/log for details
Executing postpatch script…
Patch packages installed:
SUNWcsu
SUNWxcu6
Checking installed patches…
Executing prepatch script…
Installing patch packages…
Patch 125555-12 has been successfully installed.
See /a/var/sadm/patch/125555-12/log for details
Executing postpatch script…
Patch packages installed:
SUNWcsu
Checking installed patches…
Installing patch packages…
---------------- SNIP -------------------------------------------------
Unmounting the BE <Dec2012>.
The patch add to the BE <Dec2012> completed.
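If you want to spot-check the patched BE before activating it, it can be mounted on an alternate mount point and inspected, then unmounted. A sketch:
# lumount Dec2012 /mnt
# ls /mnt/var/sadm/patch
# luumount Dec2012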

Now we activate the new BE
# luactivate Dec2012
A Live Upgrade Sync operation will be performed on startup of boot environment <Dec2012>.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:
At the PROM monitor (ok prompt):
For boot to Solaris CD:  boot cdrom -s
For boot to network:     boot net -s
3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:
zpool import rpool
zfs inherit -r mountpoint rpool/ROOT/s10s_u10wos_17b
zfs set mountpoint=<mountpointName> rpool/ROOT/s10s_u10wos_17b
zfs mount rpool/ROOT/s10s_u10wos_17b
4. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
<mountpointName>/sbin/luactivate
5. luactivate, activates the previous working boot environment and
indicates the result.
6. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Activation of boot environment <Dec2012> successful.

All done. Let's see if it looks OK:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10s_u10wos_17b            yes      yes    no        no     -
Dec2012                    yes      no     yes       no     -

Yes, Dec2012 is now the BE that will be active on reboot. Now we reboot, WITHOUT using the reboot command.
# shutdown -i6 -y -g0
Shutdown started.    Thu Dec 20 13:11:38 EST 2012
Changing to init state 6 – please wait
Broadcast Message from root (pts/1) on mybox.ca Thu Dec 20 13:11:39…
THE SYSTEM mybox.ca IS BEING SHUT DOWN NOW ! ! !
Log off now or risk your files being damaged

The system has rebooted. Let's look:
# uname -a
SunOS mybox.ca  5.10 Generic_147440-26 sun4v sparc sun4v

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10s_u10wos_17b            yes      no     no        yes    -
Dec2012                    yes      yes    yes       no     -

New kernel, and Dec2012 is the booted and active BE.
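Once you are confident you won't need to fall back, the old BE can be removed to reclaim its snapshot space; for example:
# ludelete s10s_u10wos_17b
Hope this helps.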

Tuesday, March 17, 2015

Solaris 10: Migrating From UFS to ZFS with Live Upgrade


Solaris 10 Live Upgrade with ZFS is really simple compared to some of the messes you could get into with SVM mirrored root disks. Below is a simple Live Upgrade BE creation and patching example. Solaris Live Upgrade works the same as in previous releases when you use ZFS, with the same commands; as I said, it's just easier. A really great feature is that you can now migrate from UFS file systems to a ZFS root pool and create new boot environments within a ZFS root pool.

Create the new rpool
You will need a new disk to be used as your ZFS boot disk. The first task is to create a new root pool, or rpool. You then create a new boot environment in that rpool from the existing UFS boot and root file system.
In this example, the first zfs list command shows the ZFS root pool created by the zpool command. The next zfs list command shows the datasets created by the lucreate command.

# zpool create rpool c0t2d0s2

# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool  12.4G  90.1G    20K  /rpool

We now need to create the new boot environment (BE), using the existing UFS-based boot disk as the source and the newly created rpool as the destination.

Create a new ZFS Boot Environment

# lucreate -c c0t0d0 -n Nov2012-zfsBE -p rpool

# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     12.4G  90.1G    20K  /rpool
rpool/ROOT                8.12G  90.1G    18K  /rpool/ROOT
rpool/ROOT/Nov2012-zfsBE  8.12G  90.1G   551M  /tmp/.alt.luupdall.899001
rpool/dump                3.95G      -  3.95G  -
rpool/swap                3.95G      -  3.95G  -

This is so cool, and easy. All that is left is to patch the new Nov2012 BE, activate it, and reboot. So let's say we have the typical Oracle (old Sun) patch cluster.

Patch the new ZFS based Boot Environment Using luupgrade

To patch the new BE, I would do:

# luupgrade -n Nov2012-zfsBE -s /root/10_Recommended/patches -t `cat patch_order`
<snip: patching output removed>

Then we activate the new ZFS based BE so we can boot off of the new disk.

Activate the new ZFS based Boot Environment

# luactivate Nov2012-zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <Nov2012-zfsBE>.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************

 
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:
At the PROM monitor (ok prompt):
For boot to Solaris CD:  boot cdrom -s
For boot to network:     boot net -s
3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:
zpool import rpool
zfs inherit -r mountpoint rpool/ROOT/Nov2012-zfsBE
zfs set mountpoint=<mountpointName> rpool/ROOT/Nov2012-zfsBE
zfs mount rpool/ROOT/Nov2012-zfsBE
4. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
<mountpointName>/sbin/luactivate
5. luactivate, activates the previous working boot environment and
indicates the result.
6. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Activation of boot environment <Nov2012-zfsBE> successful.

Let's see if it worked. Yes, Nov2012-zfsBE is now Active on Reboot. Let's reboot with init 6.

Boot from your new ZFS root disk

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012-zfsBE        yes      no     yes       no     -

 
# init 6

Log in, check the patch level, and see if the new software I installed is there.

# uname -a
SunOS mygreatbox 5.10 Generic_147440-26 sun4v sparc sun4v

How easy and cool was that? The handy thing with luactivate is that it changes the boot environment for you, meaning the OBP boot-device settings don't have to be changed by hand; it is all handled by the operating system during the BE activation process.
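If you want extra reassurance that root really is on ZFS now, a couple of quick checks:
# df -k /
# zpool status rpool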

Solaris: How to Clone LDOM

Solaris ZFS-based LDOMs can be cloned either by command or by script, which greatly speeds up deployments in an environment where Solaris VMs need to be built quickly. Oracle's (formerly Sun Microsystems) LDOM virtualization technology can be automated and greatly simplified when combined with the ZFS filesystem. LDOMs are Sun's spin on VMware, or at least can be roughly compared to it; one difference is that LDOMs assign physical hardware, whereas typical VMware configurations time-slice resources. LDOMs are based on the CMT processor architectures found in T4 and T2 type SPARC systems. I won't get into a big description of the technology; suffice it to say, it is highly configurable. In this blog, I show how, with a simple script, guest LDOMs can be created or cloned very quickly as fully bootable, running systems with an independent kernel and hardware.

Setting up primary control and service domains
In any LDOM configuration, at a minimum a control and service domain is required. The configuration used in this example does not use dual service domains for virtual I/O, which would provide the ultimate in availability.

Create virtual disk service

# ldm add-vds primary-vds0 primary

Create virtual console

#  ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Add virtual switch

      # ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

Create the control domain and give it services. T4 systems don't need the crypto (MAU) settings :), but if you aren't using a T4 system, then set-mau is needed.
# ldm set-mau 0 primary
# ldm set-vcpu 4 primary
# ldm start-reconf primary
# ldm set-memory 4G primary

Add and save the spconfig as initial, then reboot:
# ldm add-spconfig initial
# shutdown -i6 -g0 now
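After the reboot, it's worth confirming the services and the saved SP configuration before moving on; for example:
# ldm list-services primary
# ldm list-spconfig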

Configuring the network component that enables networking between the control/service domain and the guests needs to be done from the console. Otherwise you will blow your knees out from under yourself, and then it gets awkward.
# ifconfig vsw0 plumb
# ifconfig e1000g0 down unplumb
# ifconfig vsw0 147.28.18.28 netmask 0xffffff00 broadcast + up
# mv /etc/hostname.e1000g0 /etc/hostname.vsw0

Enable vntsd
# svcadm enable vntsd
# reboot

Setup ZFS based golden image LDOM
First, set up the storage for the LDOM. As an example, let's create a 1 TB zpool (waaaay better than VxVM & LVM). So I run the zpool create command and feed it the device nodes for the two multipathed HDS LUNs.
# zpool create LDOM_disk c0t60060E8006FF03000000FF0300001500d0 c0t60060E8006FF03000000FF0300001501d0
# zpool status LDOM_disk
pool: LDOM_disk
state: ONLINE
scan: none requested
config:
NAME                                     STATE     READ WRITE CKSUM
LDOM_disk                                ONLINE       0     0     0
  c0t60060E8006FF03000000FF0300001500d0  ONLINE       0     0     0
  c0t60060E8006FF03000000FF0300001501d0  ONLINE       0     0     0
errors: No known data errors
# zfs create -V 15g LDOM_disk/golden

OK, we've created a golden ZFS volume under the LDOM_disk pool. Damn, that was easy. SVM or VxVM/LVM, as sweet as they used to be, can't compare to ZFS for configuration simplicity. OK, moving on: let's build an LDOM with our basic configuration. I have a fully loaded T4 with 256 CPU threads and oceans of silicon for memory, so I could be generous here. But really, this is just my golden image, so the resources aren't important, as it won't be used for real workloads. I'll give it just enough resources to install and configure a healthy Solaris 10 kernel.
# ldm add-domain golden
# ldm add-vcpu 2 golden
# ldm add-memory 8G golden
# ldm add-vnet vnet1 primary-vsw0 golden

Add disk to virtual disk server

# ldm add-vdsdev /dev/zvol/dsk/LDOM_disk/golden vol0@primary-vds0

Add disk to golden LDOM

# ldm add-vdisk vdisk0 vol0@primary-vds0 golden

Set autoboot var

# ldm set-var auto-boot?=false golden

Map ISO for boot and installation
# ldm add-vdsdev /root/sol-10-u10-ga2-sparc-dvd.iso iso@primary-vds0
# ldm add-vdisk vdisk_iso iso@primary-vds0 golden
# ldm bind-domain golden
# ldm start-domain golden
LDOM golden started

# ldm list-bindings golden
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
golden          active     -t----  5000    24    32G      4.2%  20s
UUID
8939999e-0d72-e842-a93e-ae86b57c5dc6
MAC
00:14:4f:fb:c5:4f
HOSTID
0x84fbc54f
CONTROL
failure-policy=ignore
extended-mapin-space=off
cpu-arch=native
DEPENDENCY
master=
CORE
CID    CPUSET
1      (8, 9, 10, 11, 12, 13, 14, 15)
2      (16, 17, 18, 19, 20, 21, 22, 23)
3      (24, 25, 26, 27, 28, 29, 30, 31)
VCPU
VID    PID    CID    UTIL STRAND
0      8      1      100%   100%
1      9      1      0.0%   100%
2      10     1      0.0%   100%
3      11     1      0.0%   100%
4      12     1      0.0%   100%
5      13     1      0.0%   100%
6      14     1      0.0%   100%
7      15     1      0.0%   100%
8      16     2      0.0%   100%
9      17     2      0.0%   100%
10     18     2      0.0%   100%
11     19     2      0.0%   100%
12     20     2      0.0%   100%
13     21     2      0.0%   100%
14     22     2      0.0%   100%
15     23     2      0.0%   100%
16     24     3      0.0%   100%
17     25     3      0.0%   100%
18     26     3      0.0%   100%
19     27     3      0.0%   100%
20     28     3      0.0%   100%
21     29     3      0.0%   100%
22     30     3      0.0%   100%
23     31     3      0.0%   100%
MEMORY
RA               PA               SIZE
0x40000000       0x140000000      32G
CONSTRAINT
threading=max-throughput
VARIABLES
auto-boot?=false
NETWORK
NAME             SERVICE                     ID   DEVICE     MAC               MODE   PVID VID
MTU   LINKPROP
vnet1            primary-vsw0@primary        0    network@0  00:14:4f:f9:44:b9        1
1500
PEER                        MAC               MODE   PVID VID                  MTU   LINKPROP
primary-vsw0@primary        00:14:4f:fb:ed:02        1                         1500
DISK
NAME             VOLUME                      TOUT ID   DEVICE  SERVER         MPGROUP
vdisk0           vol0@primary-vds0                0    disk@0  primary
vdisk_iso        iso@primary-vds0                 1    disk@1  primary
VCONS
NAME             SERVICE                     PORT
golden          primary-vcc0@primary        5000

Get yourself onto the console and boot off of the ISO and install the operating system.
# telnet localhost 5000
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
Connecting to console "golden" in group "golden" ….
Press ~? for control options ..
~
{0} ok


{0} ok devalias
vdisk_iso                /virtual-devices@100/channel-devices@200/disk@1
vdisk0                   /virtual-devices@100/channel-devices@200/disk@0
vnet1                    /virtual-devices@100/channel-devices@200/network@0
net                      /virtual-devices@100/channel-devices@200/network@0
disk                     /virtual-devices@100/channel-devices@200/disk@0
virtual-console          /virtual-devices/console@1
name                     aliases

The vdisk_iso is a device alias that was created in the step above. So you can boot and install off of this device. Use the OBP boot command as shown below.

{0} ok boot vdisk_iso:f

NOTE: jumpstart works great here too. But in this case, I had an iso handy.
Go through the install process. Add whatever software you need, DNS settings, whatever you need for your base configs. I typically just give the LDOM a bogus IP address because I will be changing it once I boot the new guest LDOM created from this golden image. You can also, in some cases, run the sys-unconfig command and start with a clean slate. It's your choice.
Stop and unbind the guest golden image domain.

# ldm stop golden

Snap shot the disk image.

# zfs snapshot LDOM_disk/golden@golden-image

We have a fully bootable LDOM image with whatever version of the operating system installed. This is perfect. We now use this image to create the future guest LDOM's by cloning the image.
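Before cloning, a quick sanity check that the golden snapshot is in place:
# zfs list -t snapshot -r LDOM_disk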

Script to create ZFS based golden image LDOM
Use the following script to build LDOMs from the golden image. It can of course be tweaked to change features such as vcpus and memory. The script is basic, with no error checking, and uses only two positional parameters (two arguments): the first arg $1 is the LDOM name, the second $2 is the volume name for the storage.
# cat ./clone-LDOM.sh
#---------------------- START OF SCRIPT -----------------------
# Usage: ./clone-LDOM.sh <ldom-name> <volume-name>
echo Setting up clone for $1 with $2 storage
sleep 2
# Clone the golden image as this guest's boot volume
zfs clone LDOM_disk/golden@golden-image LDOM_disk/$1
echo creating domain
ldm add-domain $1
ldm add-vcpu 2 $1
ldm add-memory 4G $1
echo network
ldm add-vnet vnet1 primary-vsw0 $1
echo storage
ldm add-vdsdev /dev/zvol/dsk/LDOM_disk/$1 ${2}@primary-vds0
echo adding disk to $1 LDOM
ldm add-vdisk vdisk0 ${2}@primary-vds0 $1
echo set autoboot var
ldm set-var auto-boot?=false $1
echo binding
ldm bind-domain $1
ldm start-domain $1
ldm list-domain $1
#---------------------- END OF SCRIPT -----------------------

The assumption is that you have many vcpus and tons of disk. Here is the output of running the script, which I called clone-LDOM.sh. Let's run it to create an LDOM and associate a ZFS volume called test-ldm-vol with the LDOM. Okay, let's go:
# ./clone-LDOM.sh test-ldom test-ldm-vol
Setting up clone for test-ldom with test-ldm-vol storage
creating domain
network
storage
adding disk to test-ldom LDOM
set autoboot var
binding
LDOM test-ldom started
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
test-ldom          active     ------  5013    2     4G       0.0%  0s

That was fast. Let's have a look and see how it turned out:
# ldm list -e test-ldom
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
test-ldom          active     -t----  5013    2     4G        50%  8m
SOFTSTATE
OpenBoot Running
UUID
70ba2009-37a1-ec70-a62e-f084b30057ca
MAC
00:14:4f:fb:7d:e7
HOSTID
0x84fb7de7
CONTROL
failure-policy=ignore
extended-mapin-space=off
cpu-arch=native
DEPENDENCY
master=
CORE
CID    CPUSET
14     (112, 113)
VCPU
VID    PID    CID    UTIL STRAND
0      112    14     100%   100%
1      113    14     0.0%   100%
MEMORY
RA               PA               SIZE
0x5f800000       0xe5f800000      4G
CONSTRAINT
threading=max-throughput
VARIABLES
auto-boot?=false
NETWORK
NAME             SERVICE                     ID   DEVICE     MAC               MODE   PVID VID
MTU   LINKPROP
vnet1            primary-vsw0@primary        0    network@0  00:14:4f:fb:f0:89        1
1500
DISK
NAME             VOLUME                      TOUT ID   DEVICE  SERVER         MPGROUP
vdisk0           test-ldm-vol@primary-vds0          0    disk@0  primary
VLDCC
NAME             SERVICE                     DESC
ds               primary-vldc0@primary       domain-services
VCONS
NAME             SERVICE                     PORT
test-ldom          primary-vcc0@primary        5013

Looks great. The virtual console primary-vcc0@primary is on port 5013, meaning this is the 14th LDOM on this system. Using this basic script you can create dozens of LDOMs from the golden image built in the earlier steps. I would boot the newly created LDOMs to single-user mode and change the hostname and IP address before going multi-user. Of course, this script can be expanded to take memory and CPU values as arguments to give it more flexibility. Nothing is stopping you from adding more memory and CPU later either; depending on your version of the Domain Manager, you can use Dynamic Reconfiguration to change resources on the fly, as sketched below.
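A minimal sketch of such a DR change on a running guest, assuming your Domain Manager and firmware versions support CPU and memory DR:
# ldm set-vcpu 8 test-ldom
# ldm set-memory 8G test-ldom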

Let's boot the LDOM to single-user mode. I need to get to the console, which should give me the OBP ok prompt; from there I will boot to single-user mode and change the hostname, IP address, and possibly routing information if need be.
# telnet localhost 5013
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
Connecting to console "test-ldom" in group "test-ldom" ….
Press ~? for control options ..
{0} ok

{0} ok boot -vs
Boot device: /virtual-devices@100/channel-devices@200/disk@0  File and args: -vs
module /platform/sun4v/kernel/sparcv9/unix: text at [0x1000000, 0x10c1c1d] data at 0x1800000
module /platform/sun4v/kernel/sparcv9/genunix: text at [0x10c1c20, 0x12a6b77] data at 0x1935f40
module /platform/sun4v/kernel/misc/sparcv9/platmod: text at [0x12a6b78, 0x12a6b8f] data at 0x198d598
module /platform/sun4v/kernel/cpu/sparcv9/SPARC-T4: text at [0x12a6b90, 0x12ad04f] data at 0x198dcc0
SunOS Release 5.10 Version Generic_147440-01 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Ethernet address = 0:14:4f:fb:7d:e7
mem = 4194304K (0x100000000)
avail mem = 3821568000
root nexus = SPARC T4-4
pseudo0 at root
pseudo0 is /pseudo
scsi_vhci0 at root
scsi_vhci0 is /scsi_vhci
virtual-device: cnex0
cnex0 is /virtual-devices@100/channel-devices@200
vdisk@0 is online using ldc@14,0
channel-device: vdc0
vdc0 is /virtual-devices@100/channel-devices@200/disk@0
root on rpool/ROOT/s10s_u10wos_17b fstype zfs
pseudo-device: dld0
dld0 is /pseudo/dld@0
cpu0: SPARC-T4 (chipid 0, clock 2998 MHz)
cpu1: SPARC-T4 (chipid 0, clock 2998 MHz)
iscsi0 at root
iscsi0 is /iscsi
Booting to milestone "milestone/single-user:default".
pseudo-device: zfs0
zfs0 is /pseudo/zfs@0
WARNING: vnet0 has duplicate address 010.238.198.220 (in use by 00:14:4f:fa:68:fa); disabled
Nov 30 08:17:45 svc.startd[10]: svc:/network/physical:default: Method "/lib/svc/method/net-physical" fai
led with exit status 96.
Nov 30 08:17:45 svc.startd[10]: network/physical:default misconfigured: transitioned to maintenance (see
'svcs -xv' for details)
Hostname: rocker
pseudo-device: devinfo0
devinfo0 is /pseudo/devinfo@0
pseudo-device: pseudo1
pseudo1 is /pseudo/zconsnex@1
pseudo-device: lockstat0
lockstat0 is /pseudo/lockstat@0
pseudo-device: fcode0
fcode0 is /pseudo/fcode@0
pseudo-device: llc10
llc10 is /pseudo/llc1@0
pseudo-device: lofi0
lofi0 is /pseudo/lofi@0
pseudo-device: trapstat0
trapstat0 is /pseudo/trapstat@0
pseudo-device: fbt0
fbt0 is /pseudo/fbt@0
pseudo-device: profile0
profile0 is /pseudo/profile@0
pseudo-device: systrace0
systrace0 is /pseudo/systrace@0
pseudo-device: sdt0
sdt0 is /pseudo/sdt@0
pseudo-device: fasttrap0
fasttrap0 is /pseudo/fasttrap@0
pseudo-device: ntwdt0
ntwdt0 is /pseudo/ntwdt@0
pseudo-device: mdesc0
mdesc0 is /pseudo/mdesc@0
pseudo-device: ds_snmp0
ds_snmp0 is /pseudo/ds_snmp@0
pseudo-device: ds_pri0
ds_pri0 is /pseudo/ds_pri@0
pseudo-device: bmc0
bmc0 is /pseudo/bmc@0
pseudo-device: fcsm0
fcsm0 is /pseudo/fcsm@0
pseudo-device: fssnap0
fssnap0 is /pseudo/fssnap@0
pseudo-device: winlock0
winlock0 is /pseudo/winlock@0
pseudo-device: vol0
vol0 is /pseudo/vol@0
pseudo-device: pm0
pm0 is /pseudo/pm@0
pseudo-device: pool0
pool0 is /pseudo/pool@0
dump on /dev/zvol/dsk/rpool/dump size 1536 MB
Requesting System Maintenance Mode
SINGLE USER MODE
Root password for system maintenance (control-d to bypass):

OK, we have logged in to single-user mode using the password we configured in the golden image. Let's make some minor changes. These are basic, but they will let your system boot cleanly:
# echo "23.45.66.111   test-ldom test-ldom.mydomain.com" > /etc/hosts
# echo test-ldom > /etc/hostname.vnet0
# echo test-ldom > /etc/nodename
# hostname test-ldom

OK, that's enough for a basic setup. It should boot multi-user happily and let us install whatever you want to run.
^D
# svc.startd: Returning to milestone all.
pseudo-device: drctl0
drctl0 is /pseudo/drctl@0
pseudo-device: ramdisk1024
ramdisk1024 is /pseudo/ramdisk@1024
pseudo-device: dtrace0
dtrace0 is /pseudo/dtrace@0
pseudo-device: fcp0
fcp0 is /pseudo/fcp@0
Nov 30 08:22:23 ldmad: agent agent-device registered
Nov 30 08:22:23 ldmad: agent agent-system registered
Nov 30 08:22:23 ldmad: agent agent-dio registered
syslogd: line 24: WARNING: loghost could not be resolved
test-ldom console login: root
Password:
Nov 30 08:22:36 test-ldom pseudo: pseudo-device: devinfo0
Nov 30 08:22:36 test-ldom genunix: devinfo0 is /pseudo/devinfo@0
Nov 30 08:22:36 test-ldom login: ROOT LOGIN /dev/console
Last login: Thu Nov 29 13:46:20 on console
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
#

From here, you can add more disk, 300 IP addresses, even build branded zones if you need to (God forbid). And if you are really keen, you can take it further:

As I said, this script can easily be modified to set the number of CPUs or the memory size as well. Through Dynamic Reconfiguration, this can be done on the fly later on too.