
Friday, November 22, 2024

ODA Patching from version 19.21 to 19.23

This document describes how to patch an ODA from version 19.21 to version 19.23.

If the ODA box is on version 19.17, it cannot be patched directly to 19.23. It must first be patched to 19.21, and only then can it be patched to 19.23.

Before performing ODA patching, VM snapshots and an RMAN backup are essential; if anything fails, the VMs can be restored from the snapshots.

1.       Download 19.23 patches from Oracle site:

https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/19.23/cmtrn/oda-patches.html#GUID-ACB179CD-5901-405C-B732-AD6923D78339

 

                     Patch list:

                 Oracle Database Appliance Server Patch for the ODACLI/DCS stack (patch 36524605),

                 Oracle Grid Infrastructure clone files (patch 30403673),

                 Oracle Database clone files (patch 30403662)

2.       Log in to the ODA bare metal (BM) servers and note down the VM and DB system details.

#virsh list    >>> Shows the running VM names; we will take VM snapshots of these before starting patching

 

#odacli list-vms  >>> Lists VM information such as VM names, the node each VM is active on, current state, target state, etc.

 

3.       Identify the vdisks associated with the respective VM

#virsh domblklist <VM Name>
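Illustrative output (target names and source paths are examples only; the paths on your appliance will differ):

 Target     Source
------------------------------------------------
 vda        /u05/app/sharedrepo/testvm/vdisk_boot.img
 vdb        /u05/app/sharedrepo/testvm/vdisk_data.img
 hda        /u05/app/sharedrepo/testvm/seed.iso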

 

4.       Here we will take snapshots of the vda and vdb disks, so note down the size of each vdisk. We ignore hda as it is the ISO.

Note: Bring down respective VM before taking snapshot.


5.       Create VM snapshots for both the vdisks

6.       Save the VM snapshots to an NFS location, as in the sketch below
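A minimal sketch of steps 5 and 6, assuming the vdisk source paths reported by virsh domblklist and an NFS mount at /nfs_backup (the VM name, image paths and mount point are illustrative only):

#virsh shutdown testvm                                     >>> bring the VM down cleanly before copying
#cp -p /u05/app/sharedrepo/testvm/vdisk_boot.img /nfs_backup/testvm/
#cp -p /u05/app/sharedrepo/testvm/vdisk_data.img /nfs_backup/testvm/
#virsh start testvm                                        >>> start the VM again once the copies are done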

 

Actual Patching process:

Before starting the patching process, make sure the DB, the DB system, and the CRS cluster services are running.

 

Run the below commands from the BM server

1.       Unzip the server patch

#unzip p35938481_1923000_Linux-x86-64.zip             >>> The zip file contains oda-sm-19.23.0.0.0-date-server.zip and a readme.html file

 

2.       Update the repository with the server software file

# odacli update-repository -f /tmp/oda-sm-19.23.0.0.0-date-server.zip

 

3.       Confirm that the repository update is successful

                              # odacli describe-job -i <job_ID>           >>> every odacli command generates a job ID

4.       Update DCS admin/Components/Agents

4.1   Update DCS admin

                              # odacli update-dcsadmin -v 19.23.0.0.0

4.2   Update the DCS components

# odacli update-dcscomponents -v 19.23.0.0.0 

4.3   Update the DCS agent

#odacli update-dcsagent -v 19.23.0.0.0

 

5.       Unzip below patches:

Oracle Grid Infrastructure clone files (patch 30403673),
Oracle Database clone files (patch 30403662)
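For example, assuming the downloaded patch zips follow the usual pNNNNNNNN_<release>_Linux-x86-64.zip naming (the exact file names on your system may differ):

#unzip p30403673_1923000_Linux-x86-64.zip        >>> extracts odacli-dcs-19.23.0.0.0-date-GI-19.23.0.0.zip
#unzip p30403662_1923000_Linux-x86-64.zip        >>> extracts odacli-dcs-19.23.0.0.0-date-DB-19.23.0.0.zip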

 

6.       Update the repository with the Oracle Grid Infrastructure clone file and the Oracle Database clone file:

# odacli update-repository -f /tmp/odacli-dcs-19.23.0.0.0-date-GI-19.23.0.0.zip

# odacli update-repository -f /tmp/odacli-dcs-19.23.0.0.0-date-DB-19.23.0.0.zip

 

7.       Create prepatch report for server

# odacli create-prepatchreport -s -v 19.23.0.0.0

 

8.       Verify that the patching pre-checks ran successfully

# odacli describe-prepatchreport -i <job ID>        

Note: Fix the warnings and errors mentioned in the report and proceed with the server patching.

 

9.       Apply the server update. 

                              # /opt/oracle/dcs/bin/odacli update-server -v 19.23.0.0.0

 

10.   Confirm that the server update is successful

        # /opt/oracle/dcs/bin/odacli describe-job -i <job_ID>

 

Run the below commands from the DB server

11.   Update DCS admin/Components/Agents

11.1           Update DCS admin

                              # odacli update-dcsadmin -v 19.23.0.0.0

11.2           Update the DCS components

# odacli update-dcscomponents -v 19.23.0.0.0

11.3           Update the DCS agent

#odacli update-dcsagent -v 19.23.0.0.0

 

12.   Update the repository with the Oracle Grid Infrastructure clone file and the Oracle Database clone file:

# odacli update-repository -f /tmp/odacli-dcs-19.23.0.0.0-date-GI-19.23.0.0.zip

# odacli update-repository -f /tmp/odacli-dcs-19.23.0.0.0-date-DB-19.23.0.0.zip

 

13.   Apply the server update. 

                              # /opt/oracle/dcs/bin/odacli update-server -v 19.23.0.0.0               >>> verify it by using the command “#odacli describe-component”

14.   Create prepatch report for DB home

# odacli create-prepatchreport --dbhome --dbhomeid d8307f2e-c126-41da-ab6a-1a7f23c5c074 -v 19.23.0.0.0


We can get the dbhome ID by using the command “#odacli list-databases”

 

15.   Verify the pre-patch report; it should be successful.

Note down the job ID and run the below command to verify:

#odacli describe-job -i <job-id>

 

16.   Update db_home

# odacli update-dbhome --id d8307f2e-c126-41da-ab6a-1a7f23c5c074 -v 19.23.0.0.0 -f 

 

17.   Note down the job id and observe the status; it should be successful.


18.   Verify the patching by using the command:

# odacli describe-component        >>> we can see version 19.23 here
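An illustrative excerpt of the expected output (the actual component list is longer and the values depend on your appliance):

Component                       Installed Version    Available Version
----------------------------------------------------------------------
OAK                             19.23.0.0.0          up-to-date
GI                              19.23.0.0.0          up-to-date
DB                              19.23.0.0.0          up-to-date
DCSAGENT                        19.23.0.0.0          up-to-date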



Regards,

Kiren Jadhav



Tuesday, November 12, 2024

How to assign ILOM IP to the new Solaris server + Solaris 11


Suppose there is a requirement for a new server. After rack-mounting the server and connecting the power cables, network, etc., we can assign an ILOM IP address so that the server can be accessed from outside the DC.

Prerequisites: Note down the server make and model. A console cable is required to take an ILOM serial connection from a laptop. Get the ILOM IP address, gateway, and netmask details from the network team.


1.     Log in to ILOM using a serial connection

2.     Connect your laptop to the server's SER MGT port using the console cable.

3.       On the laptop, type “Computer Management” in the search bar and open it.

4.       Computer Management >> Device Manager >> Ports (COM & LPT)

5.       Open a PuTTY session, select Serial, and enter the COM port details found in step 4. Set the appropriate baud rate (speed); in most cases the speed is 9600.

The speed depends on the hardware, so note down the make and model of the hardware and look up the correct speed accordingly.

6.     Press Enter to establish a connection between your serial console and ILOM. A login prompt to ILOM appears.

7.       Now we can assign ILOM IP address.

           7.1 #cd /SP/network

           7.2 Type the following commands to configure a static Ethernet configuration:

                  -> set pendingipdiscovery=static
                  -> set pendingipaddress=<IP address>
                  -> set pendingipnetmask=<Netmask address>
                  -> set pendingipgateway=<Gateway address>
                  -> set commitpending=true
                  -> set state=enabled

            7.3 show /SP/network      >> To show network properties; we can see assigned ILOM IP
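For example, with purely illustrative values (use the details provided by the network team):

-> cd /SP/network
-> set pendingipdiscovery=static
-> set pendingipaddress=192.168.10.25
-> set pendingipnetmask=255.255.255.0
-> set pendingipgateway=192.168.10.1
-> set commitpending=true
-> set state=enabled
-> show /SP/network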

8. Now we will be able to access ILOM from the web interface (https://<ILOM IP>).



Regards,

Kiren Jadhav

Tuesday, July 30, 2024

How to create bootable USB (pen drive) in Solaris

Why it is required: Sometimes there is a requirement to boot a server from bootable ISO media to resolve issues such as root password recovery, a fresh installation, a server stuck in maintenance mode, etc.

Steps:

1. Search in Google for "Solaris 11.4 iso download". Depending on the server hardware (SPARC/x86), select the appropriate option. Since we have SPARC hardware here, we download the SPARC USB Text Installer.

If we want to download the ISO onto a server for a new installation, download the below image instead:

SPARC Text Installer

2. After downloading SPARC USB Text Installer, copy it to any working test Solaris server.

root@cdom3:~# ls -lrth
-rw-r--r--   1 root     root        1.1G Jul 18 13:09 sol-11_4-text-sparc.usb
root@cdom3:~#

3. Attach the bootable media (pen drive) to the test server so the image can be written to it.


root@cdom3:~# usbcopy sol-11_4-text-sparc.usb
Image type: dd-able Sparc
Found the following USB devices:
0:      /dev/rdsk/c9t0d0s2      14.7 GB SanDisk  Cruzer Blade     1.00
Enter the number of your choice: 0

WARNING: All data on your USB storage will be lost.

Are you sure you want to install to
SanDisk Cruzer Blade 1.00, 14700 MB at /dev/rdsk/c9t0d0s2 ?  (y/n) y
Copying and verifying image to USB device
Finished 1160 MB in 234 seconds (4.9MB/s)
Successfully completed copy to USB
root@cdom3:~#

Note: We will see a message like "Successfully completed copy to USB".

4. As our bootable device is now ready with the image, remove the USB from the test server.

5. Attach the USB to the server that is in maintenance mode and bring the server to the OBP (ok) prompt.

Run the below commands to scan the devices connected to the server. Here we are booting the server from the USB.

{0} ok probe-scsi-all       

Target 9
  Unit 0   Disk   HITACHI  H109030SESUN300G A606    585937500 Blocks, 300 GB
  SASDeviceName 5000cca043487328  SASAddress 5000cca043487329  PhyNum 0
Target a
  Unit 0   Disk   HITACHI  H109030SESUN300G A606    585937500 Blocks, 300 GB
  SASDeviceName 5000cca043487400  SASAddress 5000cca043487401  PhyNum 1

/pci@340/pci@1/pci@0/pci@3/usb@0/hub@8/storage@1
  Unit 0   Removable Disk     SanDiskCruzer Blade1.00

{0} ok devalias

{0} ok show-disks
a) /reboot-memory@0
b) /pci@380/pci@1/pci@0/pci@7/SUNW,qlc@0,1/fp@0,0/disk
c) /pci@380/pci@1/pci@0/pci@7/SUNW,qlc@0/fp@0,0/disk
d) /pci@380/pci@1/pci@0/pci@6/SUNW,qlc@0,1/fp@0,0/disk
e) /pci@380/pci@1/pci@0/pci@6/SUNW,qlc@0/fp@0,0/disk
f) /pci@3c0/pci@1/pci@0/pci@2/scsi@0/disk
g) /pci@300/pci@1/pci@0/pci@2/scsi@0/disk
h) /pci@340/pci@1/pci@0/pci@3/usb@0/hub@8/storage@1/disk
i) /iscsi-hba/disk
q) NO SELECTION
Enter Selection, q to quit: h
/pci@340/pci@1/pci@0/pci@3/usb@0/hub@8/storage@1/disk has been selected.
Type ^Y ( Control-Y ) to insert it in the command line.
e.g. ok nvalias mydev ^Y
         for creating devalias mydev for /pci@340/pci@1/pci@0/pci@3/usb@0/hub@8/storage@1/disk


{0} ok devalias
screen                   /pci@380/pci@1/pci@0/pci@3/display@0
primary-vds0             /virtual-devices@100/channel-devices@200/virtual-disk-server@0
primary-vsw0             /virtual-devices@100/channel-devices@200/virtual-network-switch@0
primary-vc0              /virtual-devices@100/channel-devices@200/virtual-console-concentrator@0
net3                     /pci@3c0/pci@1/pci@0/pci@1/network@0,1
net2                     /pci@3c0/pci@1/pci@0/pci@1/network@0
disk5                    /pci@3c0/pci@1/pci@0/pci@2/scsi@0/disk@p1
disk4                    /pci@3c0/pci@1/pci@0/pci@2/scsi@0/disk@p0
scsi1                    /pci@3c0/pci@1/pci@0/pci@2/scsi@0
cdrom                    /pci@3c0/pci@1/pci@0/pci@2/scsi@0/disk@p3
net1                     /pci@300/pci@1/pci@0/pci@1/network@0,1
net                      /pci@300/pci@1/pci@0/pci@1/network@0
net0                     /pci@300/pci@1/pci@0/pci@1/network@0
disk3                    /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p3
disk2                    /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p2
disk1                    /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p1
disk                     /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0
disk0                    /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0
scsi                     /pci@300/pci@1/pci@0/pci@2/scsi@0
scsi0                    /pci@300/pci@1/pci@0/pci@2/scsi@0
virtual-console          /virtual-devices/console@1
name                     aliases


{0} ok boot mydev
Boot device: /pci@300/pci@1/pci@0/pci@2/scsi@0/disk@p0  File and args: mydev
/
Can't open mydev

 

Evaluating:
The file just loaded does not appear to be executable.
Note: Since the mydev devalias was never actually created with nvalias (it does not appear in the devalias output above), OBP treats "mydev" as a boot argument, tries the default boot disk and fails. Boot directly from the full USB device path instead:

{0} ok boot /pci@340/pci@1/pci@0/pci@3/usb@0/hub@8/storage@1/disk



Regards,

Kiren Jadhav

Friday, June 21, 2024

How to mirror root disks in Solaris 11



Mirroring root disks provides redundancy, i.e. in case one disk fails, the data remains accessible from the other disk. If we want to mirror the root disk in rpool, we need to follow the below process.

In this example we are mirroring root disks that are locally present on the server. We can also mirror disks that come from external storage.


1.       List the disks attached to the server:

#echo |format

2.       Check the rpool status. Here it shows only one disk present in rpool:

#zpool status rpool

3.       Attach one more disk of the same size to rpool:

#zpool attach -f rpool <disk1> <New disk_disk2>
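For example, with illustrative disk names (c0t0d0s0 already in rpool, c0t1d0s0 as the new mirror disk):

#zpool attach -f rpool c0t0d0s0 c0t1d0s0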




4.       Resilvering will start on the new disk, and until it completes the pool will show a DEGRADED state.

What is resilvering: Resilvering is the operation of copying data from the good copies to the new device.

5.       Once resilvering is done, the pool status will be ONLINE and you will see the two disks in a mirror.
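An illustrative example of the expected status (disk names and sizes are examples only):

#zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 25.6G in 0h6m with 0 errors
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0
            c0t1d0s0  ONLINE       0     0     0

errors: No known data errors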






Thanks,

Kiren Jadhav

Unable to do root login even after entering correct root password in solaris 11


Unable to ssh root user even after entering correct root password in solaris 11


After OS installation on a physical server or an LDOM/zone, we try to log in over ssh as the root user, but sometimes it fails even though the correct root password is used. Yet logging in via the console with the same password works.

This means the problem has nothing to do with the password; something needs to be changed in the sshd configuration file. Here is the solution:

1.       Note down the PermitRootLogin value in the /etc/ssh/sshd_config file
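For example:

#grep -i PermitRootLogin /etc/ssh/sshd_config
#grep -i PasswordAuthentication /etc/ssh/sshd_config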



2.       Take a backup of the /etc/ssh/sshd_config file.

#cp -rp /etc/ssh/sshd_config /etc/ssh/sshd_config_15mar2024

 

3.       Edit the sshd_config file using the vi editor and change the PermitRootLogin and PasswordAuthentication values to yes.
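The relevant lines in /etc/ssh/sshd_config should end up as:

PermitRootLogin yes
PasswordAuthentication yes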


4.       Restart the ssh service.
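For example:

#svcadm restart svc:/network/ssh:default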



Thanks,

Kiren Jadhav

Thursday, June 20, 2024

How to delete/unconfigure ldoms in solaris 11


1.       In this example, we have the primary domain and one logical domain.

List all the LDOMs running on the primary domain:

ldm ls
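Illustrative output (domain names and resources are examples only):

NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME
primary          active     -n-cv-  UART    8     16G      0.2%  0.2%  10d
testdb           active     -n----  5000    8     32G      0.1%  0.1%  5d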

 

2.       Stop all LDOMs which are running.

2.1 Stop all running LDOMs; this will bring the LDOMs to the bound state

ldm stop-domain -a   

2.2   Unbind all the LDOMs

ldm unbind-domain -a

2.3   Remove all LDOMs and their resources from the primary domain

ldm remove-domain -a


3.       List all services:

To list all the services provided by the service domain:

ldm list-services


4.       Remove all 3 services:

4.1 Remove vds service

ldm remove-vds primary-vds0

If we remove the vds service directly it will fail, because a disk from primary-vds0 has been added to the LDOM testdb, so remove that first.

 

The below command will forcefully remove the virtual disk service even if its virtual disks are still added to an LDOM:

ldm remove-vds -f primary-vds0

4.2 Remove vsw0 service.

ldm remove-vsw primary-vsw0

              4.3 Remove vcc0 service.

ldm remove-vcc primary-vcc0

 

5.       Check the sp-config files and restart the primary domain using the factory-default spconfig:

 

ldm ls-spconfig
ldm set-config factory-default

 

6.       Now restart the server so that it boots with the factory-default configuration.

shutdown -y -i6 -g0


Thanks,

Kiren Jadhav

How to reduce coredump size in Solaris 11


Suppose rpool is full and there is no further scope for housekeeping; in that case we can try to reduce the size of the coredump (dump device) to free up some space in rpool.

If we reduce the size of the dump device directly, it will give an error and ask us to do it forcefully, but forcing it may cause data loss.


Below are the steps to reduce the coredump size.

#dumpadm     >>> to get info on current dump device and savecore directory

# zfs get volsize rpool/dump       >>> To get vol size of dump device

#zfs create -V 10G rpool/dump1       >>> creating new dump volume of size 10G (suppose previous dumpsize is 20G)

# dumpadm -d /dev/zvol/dsk/rpool/dump1          >>> set the dump device to new volume

#dumpadm             >>> We can see new dump device

#zfs list |grep -i dump                >>> We can see 2 dump devices here

#zfs destroy rpool/dump                       >>> destroy old dump device
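Once the old dump volume is destroyed, its reserved space is released back to rpool; this can be confirmed with, for example:

#zfs list rpool            >>> AVAIL should now show the reclaimed space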


Thanks,

Kiren Jadhav