
Thursday, September 8, 2022

Lucky Mobile number - Numerology

Find your Lucky Mobile number:


How do you know which mobile number is lucky for you? Here are a few tips for choosing a lucky mobile number.

1. Consult a good numerologist to know your lucky number. Suppose you have a 5 and 7 combination (where 5 is the birth number and 7 is the destiny number); then your lucky numbers are 1 and 6.

2. The total of all the digits of your mobile number should reduce to your lucky number (a short digit-sum example is shown below).

3. There is no need to include the country code when totalling the digits.

4. Only the numbers 1, 3 and 6 may repeat in your mobile number, i.e. you can have multiple 1's, 3's or 6's in your mobile number.

   The number 6 belongs to Venus, which stands for luxury, opportunities and money. If it appears multiple times, we can say it will bring more money, luxury etc., and who doesn't want more luxury :-)

5. Try to avoid the numbers 2, 4, 5, 7, 8, 9 and 0 appearing multiple times in your mobile number. If they appear once, it is OK.

6. Try to avoid mobile number totals of 28, 64 and 82, even though they reduce to 1 and 1 may be your lucky number.

I'll explain why to avoid a total of 28.

2 represents the Moon and 8 represents Saturn; the combination of both makes the person moody (due to number 2) and brings slowness and struggle (due to number 8) because of the conflicting energies of the two numbers.

Also, number 2 stands for the Moon and water, while number 8 stands for Saturn and iron. Water and iron are not a good combination; water rusts iron, so it is better to avoid it. In other words, a mobile number totalling 28 is likely to cause a lot of conflict and hinder success.
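To illustrate tip 2, here is a small shell sketch (the number used is purely illustrative, not a recommendation) that totals the digits of a mobile number and keeps reducing the total until a single digit remains:

#!/bin/bash
# Illustrative sketch only: total the digits of a mobile number and
# reduce the total to a single digit.
num="9876543210"                       # example number only
sum_digits() {
  local n=$1 s=0 i
  for (( i=0; i<${#n}; i++ )); do
    s=$(( s + ${n:i:1} ))
  done
  echo "$s"
}
total=$(sum_digits "$num")             # 9+8+7+6+5+4+3+2+1+0 = 45
while (( total > 9 )); do
  total=$(sum_digits "$total")         # 45 -> 4+5 = 9
done
echo "The mobile number total reduces to: $total"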


Regards,

Kiran Jaadhav

Thursday, July 21, 2022

Passwordless SSH for root user from compute node to all cell nodes

Passwordless SSH for root user from a compute node to all cell nodes:


Instead of copying the public key from the compute node to the authorized_keys file of every cell node, the command below can be used to set up passwordless SSH for the root user from the compute node to all the cell nodes.

Here /opt/oracle.SupportTools/onecommand/cell_group is the file containing the list of cell nodes.
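The cell_group file simply lists the cell node hostnames, one per line, for example:

#cat /opt/oracle.SupportTools/onecommand/cell_group
celadm01
celadm02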

#dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root -k -s '-o StrictHostKeyChecking=no'  


(This assumes an SSH key is already present; if not, generate one first with #ssh-keygen -t rsa)


To verify that passwordless SSH is working for all cells, use the uptime command:

#dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root uptime


===================================================================================

[root@dbadm01 onecommand]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root -k -s '-o StrictHostKeyChecking=no'

root@celadm01's password:

root@celadm02's password:

celadm01: ssh key added

celadm02: ssh key added

[root@dbadm01 onecommand]#

[root@dbadm01 onecommand]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root uptime

hcprd2celadm01: 12:57:01 up 165 days, 20:52,  0 users,  load average: 1.35, 1.91, 1.92

hcprd2celadm02: 12:57:01 up 165 days, 12:49,  0 users,  load average: 2.91, 2.15, 1.97

[root@dbadm01 onecommand]#

===================================================================================


Passwordless SSH for root user from a compute node to all compute nodes:

Here /opt/oracle.SupportTools/onecommand/dbs_group is the file containing the list of compute nodes.

#dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root -k -s '-o StrictHostKeyChecking=no'

#dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root uptime



Regards,

Kiran Jadhav

Wednesday, July 20, 2022

Exadata Patching steps from Beginning

 Exadata Patching steps from start to end: 

* Exadata patching document: MOS Doc ID 888828.1

* Patching sequence: the following sequence must be followed while doing Exadata patching:

  1. DB/Grid patching

  2. Cell node patching

  3. Compute node patching

  4. RoCE/IB switch patching

Patching steps:

1. Open Doc ID 888828.1 in MOS (My Oracle Support) to find the latest (N) or N-1 patch information. Identify the exact patch you want to download.

2. Copy the patches to the server (there will be approximately 10 files).

3. After copying the patches to the server:

[root@testJan_2022]# ls -lrth

total 30G

-rw-r--r-- 1 root root 3.1G Apr 22 17:05 p33567288_210000_Linux-x86-64_1of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 17:18 p33567288_210000_Linux-x86-64_3of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 17:27 p33567288_210000_Linux-x86-64_9of10.zip

-rw-r--r-- 1 root root 1.7G Apr 22 17:33 p33567288_210000_Linux-x86-64_10of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 18:18 p33567288_210000_Linux-x86-64_2of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 18:20 p33567288_210000_Linux-x86-64_7of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 18:24 p33567288_210000_Linux-x86-64_5of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 18:58 p33567288_210000_Linux-x86-64_8of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 19:00 p33567288_210000_Linux-x86-64_6of10.zip

-rw-r--r-- 1 root root 3.1G Apr 22 19:06 p33567288_210000_Linux-x86-64_4of10.zip


4. Unzip all the patch files:

#unzip p33567288_210000_Linux-x86-64_1of10.zip

.

#unzip p33567288_210000_Linux-x86-64_10of10.zip

or 

#unzip '*.zip'


Unzipping the files will create split tar files like the ones below:

-rw-r--r-- 1 root root 3.1G Jan 21 15:12 33567288.tar.splitaa

-rw-r--r-- 1 root root 3.1G Jan 21 15:12 33567288.tar.splitab

-rw-r--r-- 1 root root 3.1G Jan 21 15:13 33567288.tar.splitac

-rw-r--r-- 1 root root 3.1G Jan 21 15:13 33567288.tar.splitad


5. Now untar all the files to create the common patch repository:

#cat *.tar.* | tar -xvf -


6. Now unzip the patch files in the directories below in order to get the dbnodeupdate.sh and patchmgr scripts:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/SoftwareMaintenanceTools/DBNodeUpdate/21.211221

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/FabricSwitch

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataStorageServer_InfiniBandSwitch


7. Once the DB team confirms that the DB/Grid patching is done, we can start cell node patching.

8. Before doing the actual patching we have to raise a prechecks SR in MOS and upload the necessary logs: sosreport and sundiag from the compute nodes and cell nodes, exachk from one of the compute nodes, and the precheck logs from the compute nodes and cell nodes.

8.a Below are the cell node precheck commands:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataStorageServer_InfiniBandSwitch/patch_21.2.8.0.0.220114.1

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -reset_force

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -cleanup

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -patch_check_prereq -rolling -ignore_alerts

Note: If any error is found in the prechecks, get it rectified with the help of the backend team.

One possible error is described here:

https://kiranbjadhav.blogspot.com/2022/05/exadata-cell-node-patching-errorusb.html

8.b If no error is found in the prechecks, we can proceed with the actual patching commands.

Actual patching commands:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataStorageServer_InfiniBandSwitch/patch_21.2.8.0.0.220114.1

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -reset_force

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -cleanup

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -patch -ignore_alerts   

All cells will be patched in non-rolling mode (the DB needs to be down here).

If you are doing cell node patching without DB downtime, then use:

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group -patch -rolling -ignore_alerts   

All cells will get patched in rolling mode, one by one.


If you want to do manual cell node patching, taking one cell at a time:

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group_1 -patch -rolling -ignore_alerts  ---> 1 cell at a time assuming cell_group_1 has only one cell entry.

#./patchmgr -cells /opt/oracle.SupportTools/onecommand/cell_group_2 -patch -rolling -ignore_alerts   ---> and so on for the remaining cells.
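A simple way to create these per-cell group files from the full cell_group file (a sketch; adjust for the number of cells in your environment):

#cd /opt/oracle.SupportTools/onecommand
#head -1 cell_group > cell_group_1        ---> first cell only
#sed -n '2p' cell_group > cell_group_2    ---> second cell only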


9. Once cell node patching has completed successfully, we can start with compute node patching.

9.a Compute node patching prechecks

#cd /QFSDP/Jan_2022/33567288/Infrastructure/SoftwareMaintenanceTools/DBNodeUpdate/21.211221

#./dbnodeupdate.sh -u -l /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataDatabaseServer_OL7/p33665705_212000_Linux-x86-64.zip -v 

One possible error is described here:

https://kiranbjadhav.blogspot.com/2022/05/exadata-compute-node-patching-prechecks.html


9.b Actual patching command: 

Prerequisites (example commands follow this list):

* One compute node at a time

* DB and CRS must be down on that particular compute node

* NFS mount points should be unmounted
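For example, the prerequisites can typically be met on the compute node being patched with commands like the following (the Grid home path and NFS mount point are environment specific and shown only as placeholders):

#<GRID_HOME>/bin/crsctl stop crs          ---> stops CRS and the DB instances on this node
#umount <nfs_mount_point>                 ---> unmount any NFS mount points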

#./dbnodeupdate.sh -u -l /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataDatabaseServer_OL7/p33665705_212000_Linux-x86-64.zip


9.c After a successful compute node upgrade, run the command below to finish the post steps.

#./dbnodeupdate.sh -c 


10. After successful compute node patching, we can do RoCE switch or IB switch patching.

10.a RoCE switch patching prechecks:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/FabricSwitch/patch_switch_21.2.8.0.0.220114.1

#./patchmgr --roceswitches /roceswitches.lst --upgrade --roceswitch-precheck --log_dir /scratchpad/
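Here /roceswitches.lst is a plain text file listing the RoCE switch hostnames, one per line, for example (hostnames are illustrative):

#cat /roceswitches.lst
roceswitch01
roceswitch02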


10.b Actual patching command:

#./patchmgr --roceswitches /roceswitches.lst --upgrade --log_dir /scratchpad/


10.c If IB switches are present instead of RoCE switches:

#cd /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataStorageServer_InfiniBandSwitch/patch_21.2.8.0.0.220114.1

#./patchmgr -ibswitches /opt/oracle.SupportTools/onecommand/ibs_group -upgrade -ibswitch_precheck


10.d Actual patching command:

#./patchmgr -ibswitches /opt/oracle.SupportTools/onecommand/ibs_group -upgrade



Regards,

Kiran Jaadhav


Tuesday, May 24, 2022

Exadata cell node patching error_USB stayed back at old image

Exadata cell node patching error: USB stayed back at the old image


It seems the USB stayed back at the old image 20.1.9 even though the image was upgraded to 21.2.5. We can verify this with the "imageinfo" command after logging in to the problematic cell node.

#imageinfo

The Active image version and the Cell boot usb version should be the same. If they differ, the cell node patching takes a long time and eventually fails with an error.
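To see only the relevant lines, the imageinfo output can be filtered, for example:

#imageinfo | grep -iE 'active image version|cell boot usb'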


In this case, we need to rebuild the USB using the action plan below, which can be done online.

1. It may be required to stop the MS service before running the rebuild command:

#cellcli -e alter cell shutdown services MS

2. Recreate USB image with the following commands:

# cd /opt/oracle.SupportTools

# ./make_cellboot_usb -verbose -force


3. Remember to restart the MS service once make_cellboot_usb has completed:

#cellcli -e alter cell startup services MS


4. Once the execution is complete, validate configuration:

#cellcli -e alter cell validate configuration


5. Cross-verify the version:
#imageinfo     ---> the Active image version and the Cell boot usb version should now be the same.

6. Re-run the cell node precheck command and check whether it still gives the error.


Regards,
Kiran B Jaadhav

Monday, May 16, 2022

How to disable transparent_hugepage in linux

 How to disable transparent_hugepage


The output below shows that transparent_hugepage is currently enabled (the value in square brackets is the active setting).

[root@testsrv~]# cat /sys/kernel/mm/transparent_hugepage/enabled

[always] madvise never


Steps:

1. Take a backup of /etc/default/grub:

#cp -rp /etc/default/grub /etc/default/grub_16May2022 


2. Edit the file /etc/default/grub and add transparent_hugepage=never to the line

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=vg_os/lv_root rd.lvm.lv=vg_os/lv_swap rhgb quiet"

so that the entry looks like:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=vg_os/lv_root rd.lvm.lv=vg_os/lv_swap rhgb quiet transparent_hugepage=never"


3. Take a backup of the grub.cfg file:

#cp -pv   /boot/efi/EFI/redhat/grub.cfg  /boot/efi/EFI/redhat/grub.cfg-bkp

4. Create a new grub configuration file:

#grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg 

5. Reboot the server

#reboot

6. Verify whether the value has changed:

#cat /sys/kernel/mm/transparent_hugepage/enabled

[root@testsrv~]# cat /sys/kernel/mm/transparent_hugepage/enabled

always madvise [never]
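If a reboot is not immediately possible, transparent hugepages can also be disabled at runtime (this change is not persistent across reboots, so the grub change above is still required):

#echo never > /sys/kernel/mm/transparent_hugepage/enabled
#echo never > /sys/kernel/mm/transparent_hugepage/defrag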


Regards,

Kiran B Jaadhav

Friday, May 13, 2022

Exadata Compute node patching prechecks error_Inactive lvm

Exadata Compute node patching prechecks:


The commands below are used for Exadata compute node/DB node patching prechecks:

Note: Here we are applying the QFSDP patch released in January 2022.

#cd /QFSDP/Jan_2022/33567288/Infrastructure/SoftwareMaintenanceTools/DBNodeUpdate/21.211221

#./dbnodeupdate.sh -u -l /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataDatabaseServer_OL7/p33665705_212000_Linux-x86-64.zip -v 


If you get the error below while running the patching prechecks on the compute node:

Error :


  ERROR: Inactive lvm (/dev/mapper/VGExaDb-LVDbSys2) (15G) not equal to active lvm /dev/mapper/VGExaDb-LVDbSys1 (30G). Backups will fail. Re-create it with proper size


Resolution:

1. The error clearly shows that there is a size mismatch between the LVs /dev/mapper/VGExaDb-LVDbSys2 and /dev/mapper/VGExaDb-LVDbSys1.

2. The inactive root LV is 15 GB in size and the active root LV is 30 GB.

   We can check this using the command:

   #lvs

3. Make the inactive root LV the same size as the active root LV.

4. Command:

#lvextend -L +15G /dev/mapper/VGExaDb-LVDbSys2

5. Verify using command:

#lvs
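To quickly compare just the LV sizes (assuming the volume group is named VGExaDb as in the error message), something like this can also be used:

#lvs --units g -o lv_name,lv_size VGExaDb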


Regards,

Kiran B Jaadhav

Friday, May 6, 2022

How to calculate Memory utilization on Linux server

 How to calculate Memory utilization on Linux Server:


[root@prddb1 root]# free -m

              total        used        free      shared  buff/cache   available

Mem:        6190278      981277     2684757       29303     2524243     5073648

Swap:         16383         596       15787

[root@prddb1 root]#


Here the values are in MB:

Total Memory = 6190278

Available Memory = 5073648 (the "available" column, not the "free" column)

So the memory utilization is calculated as follows:

Memory Utilization = [(Total Memory - Available Memory) / Total Memory] * 100

i.e.

Memory Utilization = [(6190278 - 5073648) / 6190278] * 100 = 0.1804 * 100 ≈ 18%

This means the current (real-time) total memory utilization is about 18%.
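The same calculation can be done in one line with free and awk (in the Mem: row, column 2 is total and column 7 is available):

#free -m | awk '/^Mem:/ {printf "Memory utilization: %.2f%%\n", ($2-$7)/$2*100}'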


Regards,

Kiran B Jaadhav

Sunday, January 2, 2022

How to stop scrubbing in ZFS

 How to stop scrubbing in ZFS:

The simplest way to check data integrity is to initiate an explicit scrubbing of all data within the pool. This operation traverses all the data in the pool once and verifies that all blocks can be read.  

1. Command to check whether scrubbing is running and how much time it will take to finish:

#zpool status

The above command shows the pool on which scrubbing is running and the estimated time to finish.

========================================

Scrubbing can negatively impact performance because it increases I/O operations, so sometimes we need to stop it.

2. Command to stop scrubbing:

#zpool scrub -s <pool name>
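If you later want to run the scrub again (for example during a maintenance window), it can simply be restarted with:

#zpool scrub <pool name>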


Regards,

Kiran Jadhav