
Monday, July 7, 2025

Solaris 11.4 Installation steps from beginning



This document walks you through installing Solaris 11.4 on SPARC hardware.

1.       We are installing Solaris 11.4 from the ISO image, so first download it from the Oracle site or from MOS (My Oracle Support) and attach it through ILOM.

https://www.oracle.com/solaris/solaris11/downloads/solaris11-install-downloads.html

Link: https://support.oracle.com/portal/


2.       Attach the ISO as follows.

Log in to an ILOM GUI session. Go to Remote Control → Redirection → Launch Remote Console.

(You may need to install a Java package if you get a Java-related error.)

 


3.       You will see a console screen like this.

 


4.       Go to KVMS → Storage → Add ISO file → Connect.

 

 


5.       Add and connect the ISO file from the path where you stored it, so that it becomes visible on the server as a boot device.

 


 

Now open a CLI session to your ILOM.

6.       Stop the system and wait for a short while.

7.       Once the power state shows off, start the system. To see what is happening in the background, attach to /SP/console.
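
A sketch of the ILOM CLI commands for steps 6 and 7 (the exact target path, /System on newer ILOM firmware versus /SYS on older ones, varies by ILOM version, so treat these as illustrative):

-> stop /System

-> show /System power_state

-> start /System

-> start /SP/console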



8.       The command below searches for boot devices. We want to boot from the ISO image, and the alias “rcdrom” points to it, so we will boot from rcdrom.
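
At the OpenBoot (ok) prompt this looks roughly like the following; the alias list printed by devalias is machine-specific:

{0} ok devalias

{0} ok boot rcdrom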




9.       After booting, it will ask you to choose a keyboard layout and language.



10.   Since we want a fresh Oracle Solaris installation, select option 1.


11.   The actual Solaris installation starts from here. Press F2 to continue and move to the next screen.


12.   Select the disk where you want to install the Solaris OS.



13.   Set the required server hostname here.


14.   Enter your network configuration here.
15.   Set the correct date and time here.


16.   Set the root password here.



17.   This screen shows the installation summary. If anything needs correcting, rectify it here.



18.   This starts the installation and shows its progress (it takes approximately 1 hour).


19.   Once the installation completes properly, you will see the screen below. Press F8 to reboot the system.

 


20.   When you get a login prompt, the installation is complete. Once the server is up, we can verify that Solaris 11.4 is installed using the commands below.
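
Typical verification commands (standard Solaris utilities; the exact output depends on the SRU that was installed):

# uname -a

# cat /etc/release

# pkg info entire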



 



Regards,
Kiran Jadhav

Wednesday, July 20, 2022

Exadata Patching steps from Beginning


* Exadata patching reference: MOS Doc ID 888828.1

* Patching sequence: follow this order when patching Exadata:

  1. DB/Grid patching

  2. Cell node patching

  3. Compute node patching

  4. RoCE/InfiniBand switch patching

Patching steps:

1. Open Doc ID 888828.1 in MOS (My Oracle Support) to find the latest (N) or N-1 patch information, and identify the exact patch you want to download.

2. Copy the patch files to the server (there will be approximately 10 files).

3. After copying the patches to the server:

[root@testJan_2022]# ls -lrth

total 30G

-rw-r--r-- 1 root root 3.1G Apr 22 17:05 p33567288_210000_Linux-x86-64_1of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 17:18 p33567288_210000_Linux-x86-64_3of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 17:27 p33567288_210000_Linux-x86-64_9of10.zip

-rw-r--r-- 1 root root 1.7G Apr 22 17:33 p33567288_210000_Linux-x86-64_10of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 18:18 p33567288_210000_Linux-x86-64_2of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 18:20 p33567288_210000_Linux-x86-64_7of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 18:24 p33567288_210000_Linux-x86-64_5of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 18:58 p33567288_210000_Linux-x86-64_8of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 19:00 p33567288_210000_Linux-x86-64_6of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 19:06 p33567288_210000_Linux-x86-64_4of10.zip


4. Then unzip all the patch files:

#unzip p33567288_210000_Linux-x86-64_1of10.zip

.

#unzip p33567288_210000_Linux-x86-64_10of10.zip

or 

#unzip '*.zip'


Unzipping the files creates tar split files like these:

-rw-r--r-- 1 root root 3.1G Jan 21 15:12 33567288.tar.splitaa

-rw-r--r-- 1 root root 3.1G Jan 21 15:12 33567288.tar.splitab

-rw-r--r-- 1 root root 3.1G Jan 21 15:13 33567288.tar.splitac

-rw-r--r-- 1 root root 3.1G Jan 21 15:13 33567288.tar.splitad


5. Now concatenate and untar the split files to create the common patch repo:

#cat *.tar.* | tar -xvf -
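
Why step 5 works: the split files are plain byte slices of one large tar archive, so concatenating them in glob order restores the original archive, which tar then reads from stdin. A minimal throwaway demonstration (file and directory names here are illustrative, not the real patch archives):

```shell
# Create a sample payload, archive it, and split the archive into 1 KiB
# chunks, mimicking the 33567288.tar.splitaa / splitab / ... naming.
mkdir -p demo/payload
printf 'hello exadata\n' > demo/payload/readme.txt
tar -C demo -cf demo/patch.tar payload
split -b 1k demo/patch.tar demo/patch.tar.split

# Step 5 above: the shell glob sorts splitaa, splitab, ... in order, and
# tar extracts the reassembled archive from stdin.
mkdir -p demo/restore
cat demo/patch.tar.split* | tar -xf - -C demo/restore

# The extracted file matches the original payload.
cmp demo/payload/readme.txt demo/restore/payload/readme.txt && echo OK
```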


6. Now unzip the patch files under the directories below to get the dbnodeupdate.sh and patchmgr scripts:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/SoftwareMaintenanceTools/DBNodeUpdate/21.211221

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/FabricSwitch

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataStorageServer_InfiniBandSwitch


7. Once the DB team confirms they are done with the DB/Grid patching, we can start cell node patching.

8. Before the actual patching, raise a prechecks SR in MOS and upload the necessary logs: sosreport and sundiag from the compute and cell nodes, exachk from one of the compute nodes, and the prechecks logs from the compute and cell nodes.

8.a Cell node prechecks commands:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataStorageServer_InfiniBandSwitch/patch_21.2.8.0.0.220114.1

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -reset_force

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -cleanup

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -patch_check_prereq -rolling -ignore_alerts

Note: if the prechecks report an error, get it rectified with the help of the backend team.

One possible error is described here:

https://kiranbjadhav.blogspot.com/2022/05/exadata-cell-node-patching-errorusb.html

8.b If the prechecks find no errors, we can proceed with the actual patching.

Actual patching command:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataStorageServer_InfiniBandSwitch/patch_21.2.8.0.0.220114.1

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -reset_force

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -cleanup

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -patch -ignore_alerts   

This patches all cells at once (the databases must be down here).

If you are doing cell node patching without DB downtime, then:

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -patch -rolling -ignore_alerts   

All cells will get patched in rolling mode, one by one.


If you want to patch manually, taking one cell at a time, then:

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group_1 -patch -rolling -ignore_alerts  ---> 1 cell at a time assuming cell_group_1 has only one cell entry.

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group_2 -patch -rolling -ignore_alerts and so on


9. Once the cell node patching completes successfully, we can start compute node patching.

9.a Compute node patching prechecks:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/SoftwareMaintenanceTools/DBNodeUpdate/21.211221

#./dbnodeupdate.sh -u -l /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataDatabaseServer_OL7/p33665705_212000_Linux-x86-64.zip -v 

One possible error is described here:

https://kiranbjadhav.blogspot.com/2022/05/exadata-compute-node-patching-prechecks.html


9.b Actual patching command: 

Prerequisites:

. One compute node at a time

. DB and CRS must be down on that particular compute node

. NFS mount points should be unmounted

#./dbnodeupdate.sh -u -l /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataDatabaseServer_OL7/p33665705_212000_Linux-x86-64.zip


9.c After a successful compute node upgrade, run the command below to finish the post steps.

#./dbnodeupdate.sh -c 
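
To confirm the node is now on the new image, the standard Exadata imageinfo utility can be used; the version string shown here is illustrative, matching the 21.2.8.0.0.220114.1 patch used in this walkthrough:

# imageinfo

Active image version: 21.2.8.0.0.220114.1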


10. After successful compute node patching, we can patch the RoCE switches or InfiniBand switches.

10.a RoCE switch patching prechecks:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/FabricSwitch/patch_switch_21.2.8.0.0.220114.1

#./patchmgr --roceswitches /roceswitches.lst --upgrade --roceswitch-precheck --log_dir /scratchpad/


10.b Actual patching command:

./patchmgr --roceswitches /roceswitches.lst --upgrade --log_dir /scratchpad/


10.c If there are InfiniBand switches instead of RoCE switches:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataStorageServer_InfiniBandSwitch/patch_21.2.8.0.0.220114.1

  ./patchmgr -ibswitches /opt/oracle.SupportTools/onecommand/ibs_group -upgrade -ibswitch_precheck


10.d Actual patching command:

./patchmgr -ibswitches /opt/oracle.SupportTools/onecommand/ibs_group -upgrade



Regards,

Kiran Jadhav


Thursday, February 18, 2021

How to enable disk locator on ZFS disk

When a disk fails on a ZFS storage appliance, we can turn the disk locator LED on so the failed disk can be identified easily at replacement time.

Log in to the ZFS appliance CLI and run the commands below:

==================================================

ZFSC1:> maintenance hardware

ZFSC1:maintenance hardware> list

             NAME         STATE     MANUFACTURER  MODEL                     SERIAL        RPM    TYPE

chassis-003  1645HEN05Y   faulted   Oracle        Oracle Storage DE2-24C    1645HEN05Y    7200   hdd


Here the other chassis (chassis-000, chassis-001, chassis-002, etc.) show disk state 'ok', while chassis-003 shows 'faulted', so one of the disks in chassis-003 has likely failed.

ZFSC1:maintenance hardware> select chassis-003

ZFSC1:maintenance chassis-003> list

                          disk

                           fan

                           psu

                          slot

ZFSC1:maintenance chassis-003> select disk

ZFSC1:maintenance chassis-003 disk>

ZFSC1:maintenance chassis-003 disk> show

Disks:

          LABEL   STATE     MANUFACTURER  MODEL             SERIAL                        RPM    TYPE


disk-000  HDD 0   ok        HGST          H7390A250SUN8.0T  000555PJG4LV        VLJJG4LV  7200   data


disk-001  HDD 1   ok        HGST          H7390A250SUN8.0T  000555PJXALV        VLJJXALV  7200   data


disk-002  HDD 2   ok        HGST          H7390A250SUN8.0T  000555PGGRPV        VLJGGRPV  7200   data


disk-003  HDD 3   faulted   HGST          H7390A250SUN8.0T  000555PGHD0V        VLJGHD0V  7200   data


ZFSC1:maintenance chassis-003 disk> select disk-003

ZFSC1:maintenance chassis-003 disk-003> ls

Properties:

                         label = HDD 3

                       present = true

                       faulted = true

                  manufacturer = HGST

                         model = H7390A250SUN8.0T

                        serial = 000555PGHD0V        VLJGHD0V

                      revision = P9E2

                          size = 7.15T

                          type = data

                           use = data

                           rpm = 7200

                        device = c0t5000CCA2608B17DCd0

                     pathcount = 2

                     interface = SAS

                        locate = false

                       offline = false


ZFSC1:maintenance chassis-003 disk-003> set locate=true

                        locate = true (uncommitted)

ZFSC1:maintenance chassis-003 disk-003> commit

ZFSC1:maintenance chassis-003 disk-003> ls

Properties:

                         label = HDD 3

                       present = true

                       faulted = true

                  manufacturer = HGST

                        locate = true

                       offline = false

ZFSC1:maintenance chassis-003 disk-003>
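
After the failed disk has been replaced, the locator LED can be turned back off the same way; a sketch following the session above:

ZFSC1:maintenance chassis-003 disk-003> set locate=false

                        locate = false (uncommitted)

ZFSC1:maintenance chassis-003 disk-003> commit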


Regards,

Kiran Jadhav