
Tuesday, May 24, 2022

Exadata cell node patching error_USB stayed back at old image


It seems the USB stayed back at the old image 20.1.9 even though the image was upgraded to 21.2.5. We can verify this with the "imageinfo" command after logging in to the problematic cell node.

#imageinfo

The Active image version and the Cell boot usb version should be the same. If they differ, cell node patching takes longer and eventually fails with an error.
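The comparison above can be scripted. A minimal sketch, with a here-doc standing in for real `imageinfo` output (the exact field labels are assumed from typical imageinfo output; adjust the patterns to what your cell actually prints):

```shell
# Sample output stands in for a real `imageinfo` run on the cell node.
imageinfo_out='Active image version: 21.2.5.0.0
Cell boot usb version: 20.1.9.0.0'
active=$(echo "$imageinfo_out" | awk -F': ' '/Active image version/{print $2}')
usb=$(echo "$imageinfo_out" | awk -F': ' '/Cell boot usb version/{print $2}')
# Flag a mismatch - the condition that makes patching fail
[ "$active" = "$usb" ] || echo "MISMATCH: active=$active usb=$usb - rebuild the USB"
```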


In this case, we need to rebuild the USB using the below action plan, which can be done online.

1. It may be required to stop the MS service to run this command.

#cellcli -e alter cell shutdown services MS

2. Recreate USB image with the following commands:

# cd /opt/oracle.SupportTools

# ./make_cellboot_usb -verbose -force


3. Remember to restart the MS service once make_cellboot_usb has completed.

#cellcli -e alter cell startup services MS


4. Once the execution is complete, validate configuration:

#cellcli -e alter cell validate configuration


5. Cross-verify the version
#imageinfo     ---> Active image version and the Cell boot usb version should now be the same.

6. Run the cell node prechecks command and check whether it still gives the error.


Regards,
Kiran B Jaadhav

Monday, May 16, 2022

How to disable transparent_hugepage in linux


The output below shows that transparent_hugepage is enabled (the value in brackets is the active one):

[root@testsrv~]# cat /sys/kernel/mm/transparent_hugepage/enabled

[always] madvise never


Steps:

1. Take backup of /etc/default/grub

#cp -rp /etc/default/grub /etc/default/grub_16May2022 


2. Edit the file /etc/default/grub by adding transparent_hugepage=never to the line 

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=vg_os/lv_root rd.lvm.lv=vg_os/lv_swap rhgb quiet"

so the entry will look like

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=vg_os/lv_root rd.lvm.lv=vg_os/lv_swap rhgb quiet transparent_hugepage=never"


3. Take backup of file grub.cfg

#cp -pv   /boot/efi/EFI/redhat/grub.cfg  /boot/efi/EFI/redhat/grub.cfg-bkp

4. Create new grub configuration file

#grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg 

5. Reboot the server

#reboot

6. After the reboot, verify whether the value has changed:
[root@testsrv~]# cat /sys/kernel/mm/transparent_hugepage/enabled

always madvise [never]
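The edit in step 2 can also be scripted with sed. A sketch on a scratch copy (on the real host the file is /etc/default/grub; always work on a copy first, as in step 1):

```shell
# Demo on a scratch copy standing in for /etc/default/grub.
cat > /tmp/grub.demo <<'EOF'
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=vg_os/lv_root rhgb quiet"
EOF
# Append transparent_hugepage=never just before the closing quote
sed -i '/^GRUB_CMDLINE_LINUX=/ s/"$/ transparent_hugepage=never"/' /tmp/grub.demo
grep GRUB_CMDLINE_LINUX /tmp/grub.demo
```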


Regards,

Kiran B Jaadhav

Friday, May 13, 2022

Exadata Compute node patching prechecks error_Inactive lvm

Exadata Compute node patching prechecks:


Below commands will be used for Exadata compute node/DB node patching prechecks:

Note : Here we are applying the QFSDP patch which was released in Jan 2022

#cd /QFSDP/Jan_2022/33567288/Infrastructure/SoftwareMaintenanceTools/DBNodeUpdate/21.211221

#./dbnodeupdate.sh -u -l /QFSDP/Jan_2022/33567288/Infrastructure/21.2.8.0.0/ExadataDatabaseServer_OL7/p33665705_212000_Linux-x86-64.zip -v 


If you get the below error while doing patching prechecks on the compute node:

Error :


  ERROR: Inactive lvm (/dev/mapper/VGExaDb-LVDbSys2) (15G) not equal to active lvm /dev/mapper/VGExaDb-LVDbSys1 (30G). Backups will fail. Re-create it with proper size


Ans:

1. The error clearly shows that there is a size mismatch between the LVs /dev/mapper/VGExaDb-LVDbSys2 and /dev/mapper/VGExaDb-LVDbSys1

2. The inactive root LV is 15GB in size while the active root LV is 30GB.

   We can check it by using the command:

   #lvs

3. Make the inactive root LV the same size as the active root LV

4. Command:

#lvextend -L +15G /dev/mapper/VGExaDb-LVDbSys2

5. Verify using command:

#lvs
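The size comparison the precheck performs can be sketched like this, with a here-doc standing in for real `lvs --noheadings -o lv_name,lv_size` output (sample values match the error above):

```shell
# Sample output stands in for a real `lvs --noheadings -o lv_name,lv_size` run.
lvs_out='LVDbSys1 30.00g
LVDbSys2 15.00g'
active=$(echo "$lvs_out" | awk '$1=="LVDbSys1"{print $2}')
inactive=$(echo "$lvs_out" | awk '$1=="LVDbSys2"{print $2}')
# The precheck fails when the two root LVs differ in size
if [ "$active" != "$inactive" ]; then
  echo "size mismatch: $active vs $inactive - extend the smaller LV"
fi
```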


Regards,

Kiran B Jaadhav

Friday, May 6, 2022

How to calculate Memory utilization on Linux server


[root@prddb1 root]# free -m

              total        used        free      shared  buff/cache   available

Mem:        6190278      981277     2684757       29303     2524243     5073648

Swap:         16383         596       15787

[root@prddb1 root]#


Here the values are in MB,

Total Memory = 6190278 

Available Memory = 5073648 (the "available" column: free memory plus reclaimable buff/cache)

So the memory utilization is calculated in the below way:

Memory Utilization = [(Total Memory - Available Memory )/Total Memory] *100

i.e

Memory utilization = [ (6190278 - 5073648) / 6190278 ] * 100 = 0.1804 * 100 = 18%

This means the current real-time memory utilization is 18%.
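The same arithmetic in shell, using the values from the free -m output above; the commented one-liner at the end is the live-host equivalent (it assumes the modern `free` layout where column 7 of the Mem: line is "available"):

```shell
# Values in MB, taken from the free -m output above.
total=6190278
available=5073648
util=$(( (total - available) * 100 / total ))
echo "Memory utilization: ${util}%"
# On a live host the same figure can be read directly:
#   free -m | awk '/^Mem:/{printf "%.0f%%\n", ($2-$7)/$2*100}'
```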


Regards,

Kiran B Jaadhav

Friday, September 3, 2021

/ mountpoint increase in OEL (Oracle Enterprise Linux)


Below are the steps for increasing / mountpoint in OEL:

Step 1 :

Note down the LV and VG details of the / mountpoint. Here the root mountpoint has mirrored LVs, LVDbSys1 and LVDbSys2. The size of LVDbSys1 is 30GB and we are going to increase it by 6GB.

[root@jk root]# lvs

LV                 VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert

LVDbSwap1          VGJkDb -wi-ao----  24.00g

LVDbSys1           VGJkDb -wi-a-----  30.00g

LVDbSys2           VGJkDb -wi-ao----  30.00g

LVDoNotRemoveOrUse VGJkDb -wi-a-----   1.00g

[root@jk root]#

[root@jk root]# vgs

VG      #PV #LV #SN Attr   VSize    VFree

VGJkDb   2   5   0 wz--n- <834.89g <549.89g


Step 2: Here the primary root disk LV is LVDbSys2, so we will increase it first using the lvextend command.

[root@jk root]# lvextend -L +6G /dev/mapper/VGJkDb-LVDbSys2

Size of logical volume VGJkDb/LVDbSys2 changed from 30.00 GiB (7680 extents) to 36.00 GiB (9216 extents).

Logical volume VGJkDb/LVDbSys2 successfully resized.

[root@jk root]#

[root@jk root]# resize2fs /dev/mapper/VGJkDb-LVDbSys2

resize2fs 1.42.9 (28-Dec-2013)

Filesystem at /dev/mapper/VGJkDb-LVDbSys2 is mounted on /; on-line resizing required

old_desc_blocks = 2, new_desc_blocks = 3

The filesystem on /dev/mapper/VGJkDb-LVDbSys2 is now 9437184 blocks long.

[root@jk root]#

[root@jk root]# df -h /

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/VGJkDb-LVDbSys2   36G   24G  9.8G  71% /


Step 3 : Now increase the mirror LV of root, i.e. LVDbSys1:

[root@jk root]# lvextend -L +6G /dev/mapper/VGJkDb-LVDbSys1

Size of logical volume VGJkDb/LVDbSys1 changed from 30.00 GiB (7680 extents) to 36.00 GiB (9216 extents).

Logical volume VGJkDb/LVDbSys1 successfully resized.

[root@jk root]#

[root@jk root]# resize2fs /dev/mapper/VGJkDb-LVDbSys1

resize2fs 1.42.9 (28-Dec-2013)

Please run 'e2fsck -f /dev/mapper/VGJkDb-LVDbSys1' first.


Step 4: While doing resize2fs on the secondary root LV, the command asks us to run e2fsck on that LV first.

[root@jk root]# e2fsck -f /dev/mapper/VGJkDb-LVDbSys1

e2fsck 1.42.9 (28-Dec-2013)

Pass 1: Checking inodes, blocks, and sizes

Pass 2: Checking directory structure

Pass 3: Checking directory connectivity

Pass 4: Checking reference counts

Pass 5: Checking group summary information

/dev/mapper/VGJkDb-LVDbSys1: 111758/1966080 files (0.1% non-contiguous), 5772289/7864320 blocks

Step 5 : Once it is successful, run the resize2fs command on LVDbSys1:

[root@jk root]# resize2fs /dev/mapper/VGJkDb-LVDbSys1

resize2fs 1.42.9 (28-Dec-2013)

Resizing the filesystem on /dev/mapper/VGJkDb-LVDbSys1 to 9437184 (4k) blocks.

The filesystem on /dev/mapper/VGJkDb-LVDbSys1 is now 9437184 blocks long.


Step 6 : Now run the lvs command; we will see the increased LV size for LVDbSys2 and LVDbSys1:

[root@jk root]# lvs

LV                 VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert

LVDbSwap1          VGJkDb -wi-ao----  24.00g

LVDbSys1           VGJkDb -wi-a-----  36.00g

LVDbSys2           VGJkDb -wi-ao----  36.00g

LVDoNotRemoveOrUse VGJkDb -wi-a-----   1.00g


Step 7 : Note the increased size of the / mount point in the df output:

[root@jk root]# df -h /

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/VGJkDb-LVDbSys2   36G   24G  9.8G  71% /

[root@jk root]#
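The extent numbers in the lvextend output above can be sanity-checked. With a 4 MiB extent size (the LVM default, implied by 7680 extents for 30 GiB; confirm on a real host with `vgdisplay`):

```shell
# 4 MiB extent size, as implied by the lvextend output above.
extent_mib=4
old_gib=30
add_gib=6
old_extents=$(( old_gib * 1024 / extent_mib ))
new_extents=$(( (old_gib + add_gib) * 1024 / extent_mib ))
echo "$old_extents extents -> $new_extents extents"
```

This matches the "7680 extents" to "9216 extents" change lvextend reported.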



Regards,

Kalynajit 

Thursday, June 24, 2021

sar is not capturing logs in Linux

sar (System Activity Report) is a monitoring tool which reports system performance: CPU, memory, I/O utilization, etc.

sar captures data at 10-minute intervals and stores the log files under /var/log/sa. A file is generated per day, and log rotation happens as per the cron entries under /etc/cron.d/sysstat.

Suppose the log files are not getting generated in the /var/log/sa directory.


Check the sysstat service status, and if it is not running then restart the service.

#service sysstat status --> to check service status

#service sysstat restart --> to restart service

or

#systemctl restart sysstat
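A quick way to confirm whether sar collected anything today is to check for the day's data file. A sketch using the /var/log/sa path from the post (Debian-based systems use /var/log/sysstat instead):

```shell
# sar names its daily data file sa<day-of-month> under /var/log/sa.
sa_file="/var/log/sa/sa$(date +%d)"
if [ -f "$sa_file" ]; then
  echo "today's sar data exists: $sa_file"
else
  echo "missing: $sa_file - check the sysstat service"
fi
```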


Regards,

Kiran Jaddhav

Monday, October 26, 2020

/boot housekeeping + linux

/boot is almost full + linux


It is recommended to have at least 1GB of space for /boot.  


The safest way of cleaning up /boot is to keep just the last 2 kernels on your system:

1 - Edit /etc/yum.conf and set the following parameter

# vi /etc/yum.conf

installonly_limit=2

This will make your package manager keep just the last 2 kernels on your system (including the one that is running).

 

2 - Install yum-utils utility so the package-cleanup command will work:

# yum install yum-utils

 

3 - Make an oldkernel cleanup:

# package-cleanup --oldkernels --count=2

4 - This will clear a lot of space. If not, then note down the current kernel version and move the other kernels' files to another directory.

To know current version of kernel:

# uname -r

Or

# rpm -qa kernel

Move other kernel files to another directory

# mv /boot/<file name> /root/oldkernels
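Before moving anything, it helps to sort the installed kernels by version so only the newest two are kept. A sketch with made-up versions (on a real host the list would come from `rpm -q kernel`):

```shell
# Made-up version list standing in for real `rpm -q kernel` output.
kernels='kernel-3.10.0-957.el7
kernel-3.10.0-1160.el7
kernel-3.10.0-1127.el7'
# sort -V orders by version number; tail -2 keeps the newest two
keep=$(echo "$kernels" | sort -V | tail -2)
echo "keeping:"
echo "$keep"
```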

 

Regards,

Kirraan Jadhav


Tuesday, August 18, 2020

scriptlet failed, exit status 127 + RPM error

RPM error "scriptlet failed, exit status 127"


Sometimes while patching the server, we may get a message to delete some packages which create conflicts.

And while deleting those conflicting RPMs we get an error like this:
[root@cloud home]# rpm -e --nodeps ILMT-TAD4D-agent.i386-cs-cam.noarch

/var/tmp/rpm-tmp.XIuSM4[137]: /var/itlm/utilities/cit/wcitinst: not found [No such file or directory]

error: %preun(ILMT-TAD4D-agent-7.5-1.i386) scriptlet failed, exit status 127
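Exit status 127 is the shell's code for "command not found", which is exactly what happened here: the %preun scriptlet tried to run a helper script that no longer exists. This is easy to reproduce (the path below is illustrative):

```shell
# Run a path that does not exist - the same situation as the missing wcitinst script.
/no/such/path/wcitinst 2>/dev/null
status=$?
echo "exit status: $status"
```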


We are erasing the RPM using the -e option without removing its dependencies (reason: the dependencies could be part of other RPMs, and removing them may create problems for those RPMs).

--nodeps = do not verify dependencies
  

 Solution:

Delete the rpm using the --noscripts option. The --noscripts option tells the rpm command not to run any of the package's scriptlets (such as %preun) during uninstallation.

--noscripts = do not execute scriptlets

[root@cloud home]# rpm -e --nodeps  --noscripts ILMT-TAD4D-agent.i386

Regards,
Kalyanjit

Wednesday, December 12, 2018

ec2-user is not able to do sudo - Linux + AWS


ec2-user is not able to do sudo or become the root user:

I was getting the below error when I logged in as ec2-user with the .ppk file and then tried to become the root user using sudo -i:


[ec2-user@cloud_home]# sudo -i
sudo: effective uid is not 0, is sudo installed setuid root?

During initial troubleshooting, we found that someone had accidentally changed the permissions of the /usr/bin/sudo file (without even being aware of it).

Answer :


1.      Login to AWS console and stop the instance
2.      Detach the root disk (/dev/sda1)
3.      Attach it to any running server [in the same availability zone] as a data disk [/dev/xvdg - the device name given by AWS at the time of attaching it]
4.      Start the Instance from AWS console

5.      mount the disk on OS
[root@cloud_home2]# mount -o nouuid /dev/xvdg2 /mnt ; cd /mnt ; ls -l usr/bin/sudo

6.      Change permission of /usr/bin/sudo
[root@cloud_home2]# chmod u+s /mnt/usr/bin/sudo ; ls -l usr/bin/sudo

7.      Un-mount temporary mount point
[root@cloud_home2]# umount /mnt

8.      Detach the disk from this server
9.      Re-attach it to the original server (where we were getting the problem) as the root disk
10. Start the instance
11. Login with ec2-user

12. Now try sudo, and this time you will be able to do it.

[ec2-user@cloud_home]# sudo -i
[root@cloud_home]#
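The fix in step 6 works because sudo needs the setuid bit to run as root. A sketch on a scratch file showing what the bit looks like (on the real host the path is /usr/bin/sudo):

```shell
# Demo on a scratch file standing in for /usr/bin/sudo.
touch /tmp/sudo.demo
chmod 0755 /tmp/sudo.demo
chmod u+s /tmp/sudo.demo          # same fix as step 6 above
ls -l /tmp/sudo.demo | cut -c1-10 # a healthy sudo starts with -rws
```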

Regards,
Kirraan Jadhav


Saturday, December 8, 2018

How to uninstall package/rpm in Linux



rpm (Red Hat Package Manager) is a powerful tool which can be used to build, install, query, verify, update, and remove/erase individual software packages.

Below commands can be used to remove packages or rpm:


[root@cloud_home]# rpm -ev [Package Name]

-e = erase the specified package or rpm (-v adds verbose output)

If we want to remove a package without removing that package's dependencies, then:

[root@cloud_home]# rpm -ev --nodeps [Package Name]

Or we can remove package by using yum command as well.
[root@cloud_home]# yum remove [Package Name]

The above commands assume that we already know the exact package name.

If you don’t know the exact package name, use the below command to find it. In the below example we are finding the package name for httpd:


[root@cloud_home]# rpm -qa | grep -i httpd
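The grep step can be illustrated on sample data; the package list below is made up, since real `rpm -qa` output varies per host:

```shell
# Here-doc stands in for real `rpm -qa` output.
pkgs='httpd-2.4.6-97.el7.x86_64
openssh-server-7.4p1-21.el7.x86_64
rsyslog-8.24.0-57.el7.x86_64'
echo "$pkgs" | grep -i httpd
```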


Regards,
Kirraan Jadhav

Wednesday, November 21, 2018

ssh login issue - linux


The user is getting the below error when trying to log in to the server via PuTTY:

Error: disconnected no supported authentication methods available (server sent publickey)

The user gets the login prompt, enters the username and hits enter expecting a password prompt, but instead the screen shows the error mentioned above.

Ans: There are two methods of authentication
1.     Password authentication
2.     ssh key authentication

After entering the username, the server authenticates the user according to the methods enabled in the /etc/ssh/sshd_config file.

The likely cause is that password authentication was disabled in sshd_config, so the server falls back to checking the public key; since the user did not provide a key, he got the above error.

To resolve this, edit the file /etc/ssh/sshd_config and change the line

[root@cloud home]# vi /etc/ssh/sshd_config

PasswordAuthentication no

to

PasswordAuthentication yes


and restart the ssh daemon to re-read the sshd_config file:

[root@cloud home]# service sshd restart
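The edit can also be scripted with sed. A sketch on a scratch copy (the real file is /etc/ssh/sshd_config; test on a copy first):

```shell
# Demo on a scratch copy standing in for /etc/ssh/sshd_config.
cat > /tmp/sshd_config.demo <<'EOF'
PasswordAuthentication no
EOF
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /tmp/sshd_config.demo
grep '^PasswordAuthentication' /tmp/sshd_config.demo
```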




Regards,
Kiiran B Jadhav

Thursday, May 10, 2018

How to start ncpa (Nagios) service - Linux


How to start ncpa (Nagios) service:

The Nagios server communicates with the host when the Nagios agent/service is running on the client server.

If the service is not running, the Nagios server will not be able to communicate with the host; in that case we may need to restart the Nagios service. How to do that?

Here are the steps:

1. The ncpa_listener service is responsible for the communication.

   Check whether that service is running or not by using systemctl or the ps -ef command.

[root@cloud home]# systemctl status ncpa_listener

or

[root@cloud home]# ps -ef |grep -i ncpa



2. If the service is not running then restart it.
[root@cloud home]# systemctl restart ncpa_listener


3. Verify the status of the service with the below commands:

[root@cloud home]# systemctl status ncpa_listener

or

[root@cloud home]# ps -ef |grep -i ncpa

4. To make this service to be started automatically after server reboot:

4.1 Check the chkconfig output:

[root@cloud home]# chkconfig --list ncpa_listener

Here ncpa_listener shows "off" for all run levels.

4.2 Set the status of ncpa_listener to "on" so it starts in run level 3 and above
[root@cloud home]# chkconfig ncpa_listener on

4.3 Verify the status:

[root@cloud home]# chkconfig --list ncpa_listener

Now ncpa_listener shows the "on" value for run levels 3, 4 and 5.

Note : Please read more about run levels.

So by following the above steps we can restart the ncpa_listener service and make it start automatically after a server reboot.


Regards,
Kiran Babu Jadhav


Tuesday, March 6, 2018

How to run cronjob at 5 hours interval + Linux


In our day to day tasks, we get requests for cron job scheduling so that a script/service runs at a certain time interval.

If we want our cronjob (e.g. restarting the rsyslog service) to run every 5 hours, we can edit the crontab with the below entries:

1. Edit the crontab file; this edits the crontab file of the root user.

[root@cloud home]# crontab -e

2. Make the below entry (this is a line inside the crontab file, not a shell command):

0 */5 * * * systemctl restart rsyslog

Or

0 5,10,15,20 * * * systemctl restart rsyslog

3. List the crontab file:

[root@cloud home]# crontab -l
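Note that the two entries are not quite equivalent: "*/5" also fires at hour 0 (midnight), while "5,10,15,20" skips it. Expanding the step expression shows this:

```shell
# Expand which hours "*/5" matches: every hour divisible by 5.
hours=$(seq 0 23 | awk '$1 % 5 == 0' | tr '\n' ' ')
echo "*/5 fires at hours: $hours"
```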

Crontab fields:
* - first field: minutes [0-59]
* - second field: hour [0-23]
* - third field: day of month [1-31]
* - fourth field: month [1-12]
* - fifth field: day of week [0-7] (0 and 7 are both Sunday)


Regards,
Kiiran B Jadhav

Friday, February 16, 2018

Yum repo through NFS method

If we have a repository server (where all the patches or RPMs (Red Hat Package Manager) are stored under some directories/filesystems), then we can mount them from the repo server to the destination server through NFS.

In this example:
repohost = servername where all repositories are stored
/repo/rhel6/rhel7-base = RHEL base packages stored here
/repo/rhel7/latest-update = RHEL latest update packages stored here

Note : We can mount the entire mount point as well, instead of mounting only the above two directories of the mount point

/repo/yum & /repo/yum-update = repo mountpoint on client server

On repository server:

On client server:

1. Create mountpoint /repo/yum where we will store base packages

[root@cloud home]# mkdir -p /repo/yum

2. mount exported NFS remote filesystem /repo/rhel6/rhel7-base
[root@cloud home]# mount repohost:/repo/rhel6/rhel7-base /repo/yum

3. Create mount point /repo/yum-update
[root@cloud home]# mkdir -p /repo/yum-update

4. mount exported NFS remote filesystem /repo/rhel7/latest-update
[root@cloud home]# mount repohost:/repo/rhel7/latest-update /repo/yum-update


5. Edit /etc/auto.master file:
[root@cloud home]# vi /etc/auto.master
 /repo   /etc/auto.yum
 
6. Edit /etc/auto.yum file:

[root@cloud home]# vi /etc/auto.yum
yum   -fstype=nfs   repohost:/repo/rhel6/rhel7-base
yum-update   -fstype=nfs   repohost:/repo/rhel7/latest-update

 7. Edit /etc/yum.conf file:

[root@cloud home]# vi /etc/yum.conf
[yum]
name=yum
baseurl=file:///repo/yum
enabled=1
gpgcheck=0

[yum-update]
name=yum-update
baseurl=file:///repo/yum-update
enabled=1
gpgcheck=0
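Step 7 can be scripted as well. A sketch writing one repo section to a scratch file and sanity-checking it (on a real host, a file under /etc/yum.repos.d/ such as local.repo is the more usual place for repo sections than /etc/yum.conf):

```shell
# Scratch file standing in for /etc/yum.repos.d/local.repo.
cat > /tmp/local.repo <<'EOF'
[yum]
name=yum
baseurl=file:///repo/yum
enabled=1
gpgcheck=0
EOF
# Sanity-check that the baseurl points at the NFS-mounted directory
grep '^baseurl=' /tmp/local.repo
```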

Tuesday, February 13, 2018

How to increase FS which is in LVM in AWS

How to extend EBS volume which is part of LVM:
And
How to increase FS which is in LVM in AWS :

As part of our day to day tasks, we may get a request to increase a filesystem's size by some GB value. Suppose that filesystem is part of LVM and the instance is hosted on AWS; then you can follow the below steps.

1. First, identify the volume for the server
2. Identify the volume in AWS and increase its size
3. Make the new size visible on the server/instance
4. Extend the mount point and hence the filesystem to the desired value

Note: Here we are not adding new volume we are using existing volume only. The steps for new volume are different and quite simple too.

In this example, we are going to extend the existing volume /dev/xvdm (current size: 700GB) by 300GB which is part of LVM or belongs to volume group “VolGroup02”. Mount point name is /application.

From AWS End:
  1. Login to AWS console and search our instance with instance ID or instance name.
  2. Check the devices attached to the respective instance. Check for /dev/sdm (on the server we will see it as /dev/xvdm), the volume which we are going to increase by 300GB
  3. Click on volume /dev/sdm and choose modify volume option and increase it by 300GB. New size will be 1000GB.
After modifying the volume from AWS end we have to check on server whether it is modified or not.
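As a quick sanity check of the size math (values from the example above; this is plain arithmetic, not an AWS command):

```shell
# Values from the example: 700GB existing volume grown by 300GB.
current_gb=700
added_gb=300
new_gb=$(( current_gb + added_gb ))
echo "expected new volume size: ${new_gb}GB"
```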

From Server End:

1. Check the size of our volume on the server:
    #pvs |grep -i xvdm


It will not show the new size.


2. Resize the PV so the total size of the volume will be visible
#pvresize /dev/xvdm


 3. Recheck the size of the volume; it will show the new value now.




Now we want to extend the mountpoint /application

4. Extend the logical volume by lvextend

+100%FREE - it will use 100% of the free space in the volume group to extend the volume.



5. Resize the filesystem (e.g. with resize2fs, or xfs_growfs for XFS).

6. Check new size of filesystem.

[root@mycloud ~]# df -h /application


Regards,
Kiran Jadhaw