SunOS Sukhoi 5.10 Generic_147440-01 sun4u sparc SUNW,Sun-Fire-480R
bash-3.2# zpool create rpool c1t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t1d0s0 is part of exported or potentially active ZFS pool rpool. Please see zpool(1M).
bash-3.2# zpool create -f rpool c1t1d0s0
bash-3.2# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 77.5K 32.7G 31K /rpool
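Before running lucreate, it is worth a quick sanity check that the new pool is healthy. A minimal sketch, assuming the pool name rpool created above:

zpool status rpool        # expect state ONLINE with no known data errors
zpool list rpool          # confirm pool size and free space
zfs get mountpoint rpool  # default mountpoint /rpool, matching the zfs list output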
bash-3.2# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names
bash-3.2# lucreate -c sol_stage1 -n sol_stage2 -p rpool
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <sol_stage1>.
Creating initial configuration for primary boot environment <sol_stage1>.
INFORMATION: No BEs are configured on this system.
The device is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <sol_stage1>.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <sol_stage2>.
Source boot environment is <sol_stage1>.
Creating file systems on boot environment <sol_stage2>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/sol_stage2>.
Populating file systems on boot environment <sol_stage2>.
Analyzing zones.
Mounting ABE <sol_stage2>.
Generating file list.
Copying data from PBE <sol_stage1> to ABE <sol_stage2>.
22% of filenames transferred
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <sol_stage2>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <sol_stage1>.
Making boot environment <sol_stage2> bootable.
Creating boot_archive for /.alt.tmp.b-Vqf.mnt
updating /.alt.tmp.b-Vqf.mnt/platform/sun4u/boot_archive
Population of boot environment <sol_stage2> successful.
Creation of boot environment <sol_stage2> successful.
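The alternate boot environment now exists but is not yet active. One way to look inside it before activation is through the Live Upgrade helpers; a small sketch, assuming the BE name sol_stage2 used above and /mnt as a scratch mount point:

lufslist sol_stage2       # list the file systems that belong to the new BE
lumount sol_stage2 /mnt   # mount the ABE for inspection
cat /mnt/etc/release      # confirm the expected OS image was copied
luumount sol_stage2       # unmount it again before luactivate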
bash-3.2# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1 yes yes yes no -
sol_stage2 yes no no yes -
bash-3.2# luactivate sol_stage2
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Change the boot device back to the original boot environment by typing:
setenv boot-device /pci@9,600000/SUNW,qlc@2/fp@0,0/disk@w21000004cffb7f68,0:a
3. Boot to the original boot environment by typing:
boot
**********************************************************************
Modifying boot archive service
Activation of boot environment <sol_stage2> successful.
Reboot the server with init 6 to boot from the new boot environment.
bash-3.2# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1 yes no no yes -
sol_stage2 yes yes yes no -
bash-3.2# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 6.90G 25.8G 33.5K /rpool
rpool/ROOT 4.38G 25.8G 31K legacy
rpool/ROOT/sol_stage2 4.38G 25.8G 4.38G /
rpool/dump 2.00G 25.8G 2.00G -
rpool/swap 529M 26.4G 16K -
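After the init 6 reboot, lustatus and zfs list above show sol_stage2 as the running environment with a ZFS root. A couple of extra checks that the root really is the new dataset, as a sketch:

df -k /                   # the mounted device should be rpool/ROOT/sol_stage2
zfs list -r rpool/ROOT    # the active root dataset carries mountpoint /
cat /etc/release          # confirm the Solaris release on the new root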
bash-3.2# ludelete -f sol_stage1
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
The old boot environment is now deleted, which frees the original boot disk (c1t0d0) for mirroring rpool.
bash-3.2# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage2 yes yes yes no -
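Before the original boot disk is reused as a mirror, it is worth confirming nothing from the old UFS environment still references it. A rough sketch, assuming the old root lived on c1t0d0 (the disk that later shows a UFS file system):

mount | grep c1t0d0       # no UFS file systems should still be mounted from it
swap -l                   # make sure no swap slice on c1t0d0 is in use
grep c1t0d0 /etc/vfstab   # clean up any stale UFS or swap entries left behind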
bash-3.2# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
errors: No known data errors
bash-3.2# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cffb7f68,0
1. c1t1d0
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cffb7da9,0
Specify disk (enter its number):
bash-3.2# prtvtoc /dev/rdsk/c1t1d0s2 |fmthard -s - /dev/rdsk/c1t0d0s2
fmthard: New volume table of contents now in place.
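The prtvtoc | fmthard pipeline copies the slice layout of the ZFS boot disk onto the disk that will become its mirror, so that slice 0 on both disks has the same geometry. A quick verification sketch (the header comments will differ because they name each device):

prtvtoc /dev/rdsk/c1t1d0s2   # source label
prtvtoc /dev/rdsk/c1t0d0s2   # target label; slice 0 start and size should match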
bash-3.2# zpool attach rpool c1t1d0s0 c1t0d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t0d0s0 contains a ufs filesystem.
bash-3.2# zpool attach -f rpool c1t1d0s0 c1t0d0s0
Make sure to wait until resilver is done before rebooting.
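Rather than re-running zpool status by hand, the resilver can be watched with a small loop; a throwaway sketch:

while zpool status rpool | grep 'resilver in progress' > /dev/null
do
    zpool status rpool | grep 'scanned out of'
    sleep 60
done
echo resilver finished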
bash-3.2# zpool status
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Thu Jan 12 05:20:44 2017
1.31G scanned out of 6.38G at 36.4M/s, 0h2m to go
1.31G resilvered, 20.59% done
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0 (resilvering)
errors: No known data errors
bash-3.2# zpool status
pool: rpool
state: ONLINE
scan: resilvered 6.38G in 0h5m with 0 errors on Thu Jan 12 05:25:59 2017
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0
errors: No known data errors
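With the resilver complete, both halves of mirror-0 hold a full copy of the pool. On SPARC it is also worth making sure the newly attached disk carries a ZFS boot block so the system can boot from either half; depending on the Solaris 10 update, zpool attach may or may not have written it already. A sketch, assuming the new half is c1t0d0s0:

installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0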
Regards
Gurudatta N.R