
Friday, June 21, 2024

How to mirror root disks in Solaris 11


Mirroring the root disk provides redundancy: if one disk fails, the data remains accessible from the other disk. To mirror the root disk in rpool, follow the process below.

In this example we mirror root disks that are physically local to the server; the same procedure also works for disks presented from external storage.


1.       List the disks attached to the server:

#echo | format
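
The output looks something like the sample below (disk models and device paths are illustrative; yours will differ):

Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
          /pci@400/pci@2/pci@0/pci@4/scsi@0/sd@0,0
       1. c0t1d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
          /pci@400/pci@2/pci@0/pci@4/scsi@0/sd@1,0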

2.       Check the rpool status. Here it shows only one disk present in rpool:

#zpool status rpool

3.       Attach one more disk of the same (or larger) size to rpool:

#zpool attach -f rpool <existing disk> <new disk>
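
For example, if the existing rpool disk is c0t0d0 and the new disk is c0t1d0 (hypothetical device names; use the names shown by the format output on your system):

#zpool attach -f rpool c0t0d0 c0t1d0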

4.       Resilvering starts on the new disk; until it completes, the pool shows a DEGRADED state.

What is resilvering? Resilvering is the process of copying data from the good copies to the new device.

5.       Once resilvering is done, the pool status returns to ONLINE and you will see the two disks in a mirror, as in the sample below.
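
A finished mirror looks something like this (device names, sizes, and timestamps are illustrative):

#zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 24.5G in 0h8m with 0 errors on Fri Jun 21 10:15:32 2024
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0

errors: No known data errors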

Thanks,

Kiren Jadhav

Sunday, January 2, 2022

How to stop scrubbing in ZFS

The simplest way to check data integrity is to initiate an explicit scrubbing of all data within the pool. This operation traverses all the data in the pool once and verifies that all blocks can be read.  

1. Command to check whether scrubbing is running and how much time it will take to finish:

#zpool status
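
While a scrub is running, the 'scan' line reports progress, for example (pool name and numbers illustrative):

  pool: rpool
 state: ONLINE
  scan: scrub in progress since Sun Jan  2 09:10:11 2022
        120G scanned out of 500G at 150M/s, 0h43m to go
        0 repaired, 24.00% done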

The command above shows the pool name on which the scrub is running.

========================================

Scrubbing can negatively impact performance because it increases I/O load, so sometimes we need to stop it.

2. Command to stop scrubbing:

#zpool scrub -s <pool name>
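
For example, to stop a scrub running on rpool (hypothetical pool name):

#zpool scrub -s rpool

Afterwards, zpool status should show a line like 'scan: scrub canceled on ...' for that pool.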


Regards,

Kiren Jadhav

Thursday, December 30, 2021

How to check ZFS snapshot

ZFS snapshot:

A snapshot is a read-only copy of a file system or volume. Snapshots can be created almost instantly, and they initially consume no additional disk space within the pool. However, as data within the active dataset changes, the snapshot consumes disk space by continuing to reference the old data, thus preventing the disk space from being freed.

Command to check snapshots:

#zfs list -t snapshot
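
The output looks something like this (dataset and snapshot names are illustrative):

NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool/export/home@backup1  1.21M      -  4.52G  -
rpool/export/home@backup2    35K      -  4.52G  -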

Snapshots can be destroyed using the command below.

#zfs destroy <snapshot name>
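
For example, to destroy a snapshot named backup1 of rpool/export/home (hypothetical names):

#zfs destroy rpool/export/home@backup1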


Regards,

Kumar Jadhav

Monday, August 16, 2021

How to reboot ZFS appliance


Below is the command to reboot the ZFS storage appliance. Log in to the ak shell prompt and then run:

#maintenance system reboot
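
A session might look like the following (prompt name illustrative; the exact confirmation text may vary by firmware version):

zfsc1:> maintenance system reboot
This will reboot the appliance. Are you sure? (Y/N)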


Regards,

Kalyanjit

Sunday, July 18, 2021

How to run sundiag on multiple cell nodes - exadata or SSC

What is sundiag?

sundiag is the Oracle Exadata Database Machine diagnostics collection tool. It collects diagnostic information that helps the support analyst diagnose problems such as failed hardware, e.g. a failed disk.

In an Exadata box or Solaris SuperCluster (SSC) we may have multiple storage cell nodes attached.

If we have 10-12 storage cell nodes, logging in to each cell and collecting sundiag one by one is time consuming. With the single command below we can run sundiag on all cells at once (passwordless ssh must be configured from the compute node to the cell nodes).

1. On Solaris SuperCluster:

#dcli -g /opt/oracle.supercluster/bin/cell_group -l root /opt/oracle.SupportTools/sundiag.sh

where # cat /opt/oracle.supercluster/bin/cell_group  --> lists the cell nodes attached to the SSC machine
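
For illustration, the cell_group file simply lists one cell node hostname per line (hostnames hypothetical):

# cat /opt/oracle.supercluster/bin/cell_group
ssccel01
ssccel02
ssccel03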


2. On Exadata servers:

#dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root /opt/oracle.SupportTools/sundiag.sh

where # cat /opt/oracle.SupportTools/onecommand/cell_group  --> lists the cell nodes attached to the Exadata machine


Thank U

- Kiiran B Jaadhav

Thursday, June 3, 2021

How to find serial number of cell nodes - Exadata or SSC


In an Exadata box or Solaris SuperCluster (SSC) we may have multiple storage cell nodes attached.

If we have 10-12 storage cell nodes, instead of logging in to each cell we can find the serial numbers with the single command below:


1. On Solaris SuperCluster:

#dcli -g /opt/oracle.supercluster/bin/cell_group -l root "dmidecode -s system-serial-number"

where # cat /opt/oracle.supercluster/bin/cell_group  --> lists the cell nodes attached to the SSC machine

2. On Exadata servers:

#dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "dmidecode -s system-serial-number"

where # cat /opt/oracle.SupportTools/onecommand/cell_group  --> lists the cell nodes attached to the Exadata machine
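
dcli prefixes each output line with the cell node name, so a run looks something like this (hostnames and serial numbers hypothetical):

exacel01: 1642NM1234
exacel02: 1642NM1235
exacel03: 1642NM1236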

Thank U

- Kiiran B Jaadhav

Tuesday, May 11, 2021

How to split big files in Solaris

Sometimes a log file is too large to send to the backend team for further analysis. In that case we break the file into multiple pieces using the split command.


Whenever we split a large file with the split command, each output file defaults to 1000 lines and the default prefix is 'x'.

Suppose we want to send a crash dump (vdump) file which is 2GB in size. We will break that file into 4 pieces of 500MB each.


In that case:

#split -b 500M vdump.24 = This breaks the vdump.24 file into 4 pieces named xaa, xab, xac, xad


If names like xaa and xab seem confusing, it is better to give a specific prefix at the time of splitting the file itself.

#split -b 500M vdump.24 vdump_11May2021_X = It will create new files named vdump_11May2021_Xaa, vdump_11May2021_Xab, and so on


If we want to split the file into 1GB pieces, we may use a command like:

#split -b 1G vdump.24 vdump_11May2021_1

or

#split -b 1024M vdump.24 vdump_11May2021_1
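
On the receiving side the pieces can be joined back together with cat; because the generated suffixes sort alphabetically, a simple wildcard keeps them in order:

#cat vdump_11May2021_1* > vdump.24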



Regards,

Kiren Jadhav

Friday, March 26, 2021

How to increase session timeout value in ILOM

The session gets disconnected if it is left unattended for more than 15-20 minutes, depending on the settings on your system. Sometimes during patching we may face problems due to session timeout, so it is better to increase the timeout value from the server or from the ILOM.


Here we are increasing session timeout value for ILOM.

1. Log in to the ILOM CLI and check the current timeout value.


-> show /SP/cli

 /SP/cli

    Targets:

    Properties:

        legacy_targets = disabled

        prompt = (none)

        timeout = 20


In our case timeout = 20, which means the session gets disconnected after 20 minutes if left unattended.

2. The timeout value is the number of minutes of inactivity before the session times out (1-1440).

3. Here we set the timeout value to 60, i.e. 1 hour.

-> set /SP/cli timeout=60

Set 'timeout' to '60'


-> show /SP/cli

 /SP/cli

    Targets:

    Properties:

        legacy_targets = disabled

        prompt = (none)

        timeout = 60

->

Note : Setting a timeout value of 0 disables the timeout feature.
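
The post title also mentions Solaris; on the OS side, assuming a bash or ksh login shell, the shell's own idle timeout is controlled by the TMOUT variable (in seconds), and setting it to 0 disables the auto-logout:

#export TMOUT=0  --> disables the shell idle timeout for the current session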


Thank U !

Kiran Jaddhav

Thursday, February 18, 2021

How to enable disk locator on ZFS disk

Suppose there is a disk failure on a ZFS appliance. We can turn the disk locator "ON" so the failed disk can be identified easily at the time of disk replacement.

Log in to the ZFS appliance CLI prompt and run the commands below:

==================================================

ZFSC1:> maintenance hardware

ZFSC1:maintenance hardware> list

             NAME         STATE     MANUFACTURER  MODEL                     SERIAL        RPM    TYPE

chassis-003  1645HEN05Y   faulted   Oracle        Oracle Storage DE2-24C    1645HEN05Y    7200   hdd


Here the other chassis (chassis-000, chassis-001, chassis-002, etc.) show the disk status as 'ok', while chassis-003 shows 'faulted', so one of the disks present in chassis-003 has likely failed.

ZFSC1:maintenance hardware> select chassis-003

ZFSC1:maintenance chassis-003> list

                          disk

                           fan

                           psu

                          slot

ZFSC1:maintenance chassis-003> select disk

ZFSC1:maintenance chassis-003 disk> show

Disks:

          LABEL   STATE     MANUFACTURER  MODEL             SERIAL                        RPM    TYPE


disk-000  HDD 0   ok        HGST          H7390A250SUN8.0T  000555PJG4LV        VLJJG4LV  7200   data


disk-001  HDD 1   ok        HGST          H7390A250SUN8.0T  000555PJXALV        VLJJXALV  7200   data


disk-002  HDD 2   ok        HGST          H7390A250SUN8.0T  000555PGGRPV        VLJGGRPV  7200   data


disk-003  HDD 3   faulted   HGST          H7390A250SUN8.0T  000555PGHD0V        VLJGHD0V  7200   data


ZFSC1:maintenance chassis-003 disk> select disk-003

ZFSC1:maintenance chassis-003 disk-003> ls

Properties:

                         label = HDD 3

                       present = true

                       faulted = true

                  manufacturer = HGST

                         model = H7390A250SUN8.0T

                        serial = 000555PGHD0V        VLJGHD0V

                      revision = P9E2

                          size = 7.15T

                          type = data

                           use = data

                           rpm = 7200

                        device = c0t5000CCA2608B17DCd0

                     pathcount = 2

                     interface = SAS

                        locate = false

                       offline = false


ZFSC1:maintenance chassis-003 disk-003> set locate=true

                        locate = true (uncommitted)

ZFSC1:maintenance chassis-003 disk-003> commit

ZFSC1:maintenance chassis-003 disk-003> ls

Properties:

                         label = HDD 3

                       present = true

                       faulted = true

                  manufacturer = HGST

                        locate = true

                       offline = false

ZFSC1:maintenance chassis-003 disk-003>
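
After the failed disk has been replaced, the locator can be switched off again with the same pattern (a sketch following the session above):

ZFSC1:maintenance chassis-003 disk-003> set locate=false
                        locate = false (uncommitted)
ZFSC1:maintenance chassis-003 disk-003> commit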


Regards,

Kiran Jaddhav

Monday, December 14, 2020

Zoneadm commands list in solaris 11


The zoneadm utility is used to administer system zones. 

Run below commands from Global zone:

#zoneadm list -civ  --> to list the zones and their status (running, installed, configured, etc.)

#zoneadm -z (zone_name) halt  --> to halt the zone

#zoneadm -z (zone_name) boot  --> to boot a halted zone

#zoneadm -z (zone_name) reboot  --> to reboot the zone


To log in to a zone:

#zlogin (zone_name)  --> to log in to a particular zone
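
Sample output of zoneadm list -civ (zone names and paths are illustrative):

  ID NAME     STATUS      PATH                  BRAND      IP
   0 global   running     /                     solaris    shared
   1 zone1    running     /system/zones/zone1   solaris    excl
   - zone2    installed   /system/zones/zone2   solaris    excl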



Regards,

Kirraan Jadhav

Tuesday, August 11, 2020

How to generate (ak) Support bundles on ZFS storage

Step 1: Log in to the ZFS appliance CLI and run the main command:

zfsc1:> maintenance system bundle

The support data you requested is being built in 3 files. Use 'send <srn>' with each bundle to associate the bundle with a Service Request Number and send it to

Oracle Support. Alternatively, you may download the bundles via the appliance BUI.

    ak.938k008c-9e13-4593-9305-b54d87659710.tar.gz

    ak.3b22kdbf-c95i-4b6a-be85-d89f58b18b97.tar.gz

    ak.eee2580k-bfi8-6442-defe-e9faccd32fb7.tar.gz


Then drop to a shell prompt:

zfsc1:> confirm shell   ---> gives us a shell prompt

Step 2: Go to the location where the bundles are stored:

# cd /var/ak/bundles   --> bundles are stored here with the current date
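
A quick listing confirms the bundle files (shown here with one of the bundle names from step 1):

# ls /var/ak/bundles
ak.938k008c-9e13-4593-9305-b54d87659710.tar.gz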

Step 3: We can also check whether the support bundle processes are running with ps:

#ps -ef | grep -i bund


Example:

zfsc1# ps -ef |grep -i bund

    root 16467   389   0 13:03:19 ?           0:10 /usr/bin/bash -p /usr/bin/akbundle -b 938k008c-9e13-4593-9305-b54d87659710 -c C

    root  2251 16467   0 13:13:19 ?           0:05 /usr/bin/bash -p /usr/bin/akbundle -b 938k008c-9e13-4593-9305-b54d87659710 -c C


Regards,

Kiren Jadhav