Wednesday, December 23, 2020

How to install Molecule and create roles and scenarios

 

What is Molecule?


No one is perfect. We all make mistakes but we can avoid many of them with careful planning.

The Molecule project is designed to aid in the development and testing of Ansible roles.

Molecule provides support for testing with multiple instances, operating systems and distributions, virtualization providers, test frameworks and testing scenarios.

 

Using Molecule is much simpler. We can create multiple scenarios for testing configuration variances, and we can use Docker images for all the common operating systems and versions, ensuring they are all thoroughly tested.



Installation

Molecule is written in Python and distributed as a pip package. In most cases it can be easily installed with the pip command. I am using Docker to test my roles, so I need to install the Molecule package with Docker support.

pip install molecule

pip install 'molecule[docker]'

If you have trouble with the installation process check the very well documented steps here.

 

How to init a role

I’m going to make this section quite practical so you can follow through the steps and set up your own roles with Molecule.

First thing is to initialise the role. There are two different ways depending on whether you’re updating an existing role or creating a brand new role.


How to create a new role

 

The molecule init command creates not just the molecule directory but also the Ansible role structure.
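For example (the role name nginx is just a placeholder here, and the exact flags depend on your Molecule version):

```shell
# Molecule 3.x: create a brand new role with a molecule/ scenario included
molecule init role nginx --driver-name docker

# Molecule 2.x used flags for the role name instead:
# molecule init role -r nginx -d docker
```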

 



 

 

Update Existing

If, on the other hand, you are updating an existing role, the following command creates only the molecule directory with the most basic configuration.
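A sketch of that command, run from inside the existing role's directory (adjust the flags for your Molecule version):

```shell
# Run from inside the existing role directory;
# creates only molecule/default/ with a basic configuration.
molecule init scenario --driver-name docker    # Molecule 3.x
# Molecule 2.x equivalent (role name passed explicitly):
# molecule init scenario -r <role-name> -d docker
```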

 



 Molecule contents

Let’s first of all examine the contents of the molecule directory:

 




 

The first thing you’ll notice is there is a default directory. This defines your test scenario. For instance, it allows you to test the same role using different configurations. Inside the default directory there are several files. We’re interested in just the molecule.yml and playbook.yml

 

Let’s look first at molecule.yml

 

---
# use Ansible Galaxy to get other modules
dependency:
  name: galaxy
# we'll be running our tests inside a docker container
driver:
  name: docker
# linter to check file syntax; try "molecule lint" to test it
lint:
  name: yamllint
# docker images to use, you can have as many as you'd like
platforms:
  - name: instance
    image: centos:7
# run ansible in the docker containers
provisioner:
  name: ansible
  lint:
    name: ansible-lint
# Molecule handles role testing by invoking configurable verifiers.
verifier:
  name: testinfra
  lint:
    name: flake8

 


 

And this is playbook.yml, which is a standard Ansible playbook to invoke your role.

---
- name: Converge
  hosts: all
  roles:
    - role: nginx

 

First test

You can test your initial setup simply by running 'molecule create'. This command creates the Docker instances where Ansible will run your playbooks. Another handy command is 'molecule login', which opens a shell into the running Docker container, for example to perform some debugging.
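Put together, a first smoke test looks like this (the --host flag is only needed once you define more than one platform):

```shell
molecule create                   # build and start the Docker instance(s)
molecule login                    # open a shell inside the running container
# molecule login --host instance  # pick an instance by name
molecule destroy                  # tear everything down again
```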

 



  

First role

 

I’m going to assume you are creating a new role for this lab. I’m doing a simple nginx installation. My first change is to make the role install nginx when it runs, by adding the following to tasks/main.yml:

 

You may have noticed the default image in molecule.yml is centos:7. Feel free to change it to whatever image you prefer, but I’m sticking to the defaults for the moment, and I’ll show how to test for multiple distributions shortly.

 

 

- name: Install EPEL
  package:
    name: epel-release
    state: present
  when: ansible_os_family == "RedHat"

- name: Install nginx
  package:
    name: nginx
    state: present

 

 

Now you can simply run ‘molecule converge’ and the default scenario will run to test your role.





 

 

 

As you can see the role completed successfully!

 

You can use the command 'molecule test' instead, which runs through every single step in Molecule, such as linting, converging, clean-up, destroy, etc. But for our tests, converge alone is much faster, and lint is unlikely to succeed as we haven’t edited the meta/* configurations.
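For reference, a full run walks through a sequence roughly like the following; the exact stages vary between Molecule versions:

```shell
molecule test
# roughly equivalent to running, in order:
#   molecule lint        # yamllint / ansible-lint / flake8
#   molecule destroy     # start from a clean slate
#   molecule create
#   molecule converge
#   molecule idempotence # converge again; any reported change is a failure
#   molecule verify      # run the testinfra tests
#   molecule destroy
```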

 

 

 

Testing for multiple operating systems

The above role has only been tested on CentOS 7, but it may also be used on Ubuntu or Debian, so we should test those as well. We do this by adding the different images to the platforms section of the molecule.yml file:

 

platforms:
  - name: CentOS-7
    image: centos:7
  - name: Ubuntu-18
    image: ubuntu:18.04

 

 

 

As simple as that, the 'molecule converge' command will now create two Docker instances, one for CentOS 7 and one for Ubuntu 18.04, and run Ansible with our brand new role in both.

 

You will need to execute 'molecule destroy' after you change molecule.yml for Molecule to pick up the new configuration.
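So after editing molecule.yml:

```shell
molecule destroy   # remove instances built from the old configuration
molecule converge  # recreate them from the new platforms list
```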


 

 


Conclusion

Molecule can be of great help in ensuring your roles are up to the best standards before you tag them for use. It helps you ensure quality code that works (at least in isolation). That doesn’t mean everything will be perfect; you will rarely be running a single role in your playbook. But it should simplify your debugging when problems occur.

 

Friday, August 7, 2015

VMware Tools does not start after updating the kernel in RHEL 6




Issue:

VMware Tools does not start after updating the kernel with boot log messages:

Checking acpi hot plug                                     [  OK  ]
Starting VMware Tools services in the virtual machine:
Switching to guest configuration:                       [  OK  ]
VM communication interface:                             [FAILED]
VM communication interface socket family:          [FAILED]
Guest operating system daemon:                         [  OK  ]

 

  • VMware Tools not running after kernel patches are installed
  • VMware Tools not working
  • After the latest OS patch set is installed, the VM comes up without NIC interfaces
  • After upgrading the kernel, some devices no longer work

 

Environment

  • Red Hat Enterprise Linux (RHEL) 5 Virtual Machine
  • Red Hat Enterprise Linux (RHEL) 6 Virtual Machine
  • VMware ESX Hypervisor

 

Resolution

  • After updating the kernel, VMware Tools just needs to be reconfigured, not reinstalled.
    Run the vmware-config-tools.pl script (from inside the RHEL guest) to update the initramfs/initrd:

# vmware-config-tools.pl

NOTE: There is a high possibility that running this script will interrupt network services on the server; run it from the console during regularly-scheduled downtime to prevent problems with applications and users

Root Cause

  • In order for VMware-specific virtual hardware (e.g. the VMXNET3 Ethernet controller) to work with a new kernel, the initial ramdisk (initramfs in RHEL 6, initrd in RHEL 5 and earlier) needs to be rebuilt to include the appropriate VMware-specific kernel modules (drivers)
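Rebuilding the ramdisk by hand looks roughly like this (vmware-config-tools.pl normally does this for you, so treat it as background rather than a required step):

```shell
# RHEL 6: rebuild the initramfs for the running kernel
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)

# RHEL 5 and earlier used mkinitrd instead:
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
```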

Diagnostic Steps

  • Check for VMware specific hardware in the output of an lspci command
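For example (output will vary with the virtual hardware version; this assumes the guest really is running on VMware):

```shell
# VMware virtual devices list "VMware" as the vendor
lspci | grep -i vmware
# the paravirtual NIC typically shows up as a "VMXNET3 Ethernet Controller"
```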

 

Saturday, March 28, 2015

Email Exchanging on Linux system using Sendmail+Dovecot+Squirrelmail
 
  • Email lets you send a message from one computer to another anywhere in the world, via a mail server.
  • Many servers cooperate to exchange email between one network and another.
  • The mail server is responsible for sending and receiving mail from one client to another.

Wednesday, March 20, 2013

How to rename the interface in Linux


Rename network interfaces
Prerequisites
  • ifrename tool, which is a part of the wireless_tools package.
  • udev package, which is already installed, of course.
Generic network interfaces
Option 1: udev
Create /etc/udev/rules.d/10-network.rules file with content like following:
 SUBSYSTEM=="net", ATTR{address}=="00:1e:58:48:33:08", NAME="lan"
 SUBSYSTEM=="net", KERNEL=="tap1", NAME="vpn"
 SUBSYSTEM=="net", KERNEL=="tap5", NAME="qemu"
Note: Make sure to use the lower-case hex values in your udev rules. It doesn't like uppercase.
Note: The example above avoids names such as eth0, eth1, etc., and instead uses names that are not initially assigned. Trying to rename using names like eth0, eth1, etc. may fail.
Note: If using systemd, you may find it necessary to inform your network service to hold until the network device has been renamed. This may be achieved by adding the following to the [Unit] section of your network.service file:
 Requires=systemd-udev-settle.service
 After=systemd-udev-settle.service
Option 2: ifrename
Run ifrename directly
 # ifrename -i eth0 -n lan
or create config file (/etc/iftab), for example:
lan            mac 00:0C:6E:C6:94:81
internet       mac 00:0C:6E:C6:94:82
and run
 # ifrename -c /etc/iftab
Example:
How to rename the interface in linux using ifrename 


__tmp469514561 Link encap:Ethernet  HWaddr 00:1F:29:60:B6:A5
__tmp1982053100 Link encap:Ethernet  HWaddr 00:26:55:D6:91:1B


Create an /etc/iftab file listing the new NICs with their MAC addresses, then execute the ifrename commands mentioned below:

/etc/iftab:
eth12 mac 00:26:55:d6:91:1b
eth14 mac 00:1f:29:60:b6:a5


ifrename -i __tmp469514561 -n eth14
ifrename -i __tmp1982053100 -n eth12




PPP interfaces
Add into /etc/ppp/ip-up script the following lines:
 IF=$1
 ip link set dev $IF down
 /usr/sbin/ifrename -i $IF -n <NEWNAME>
 ip link set dev <NEWNAME> up
where <NEWNAME> is the new name for the ppp interface





Wednesday, February 13, 2013

Removing multipath disks



In one of our RHEL4 blade servers, I need to remove a
1x500G HP XP1000 LUN:
PV         VG             Fmt  Attr PSize   PFree
  /dev/dm-17 lx0080apps00vg lvm2 a-   200.00G  11.50G
  /dev/dm-18 lx0080apps00vg lvm2 a-   200.00G  11.50G
  /dev/dm-19 lx0080apps02vg lvm2 a-   500.00G 100.00G
  /dev/dm-20 lx0080apps01vg lvm2 a-   250.00G  50.00G
  /dev/dm-21 lx0080apps01vg lvm2 a-   100.00G  50.00G
  /dev/dm-22 lx0080apps01vg lvm2 a-   200.00G  10.00G
  /dev/dm-23 lx0080apps01vg lvm2 a-   200.00G  10.00G
  /dev/dm-24 lx0080apps01vg lvm2 a-   200.00G  10.00G
  /dev/sda2  vg00           lvm2 a-    67.88G   4.47G
[root@lx0080 ~]# pvdisplay /dev/dm-19
--- Physical volume ---
  PV Name               /dev/dm-19
  VG Name               lx0080apps02vg
  PV Size               500.00 GB / not usable 1.25 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              128000
  Free PE               25600
  Allocated PE          102400
  PV UUID               O9b3hw-oSUO-pUpe-oGJw-zEBp-tKXo-3ba2g9
So here's how to do it without causing disruptions on the machine.
BTW, this machine hosts a test environment.

CAUTION: DON'T DO THIS IN A PRODUCTION ENVIRONMENT ON A REGULAR
BUSINESS DAY (even if you are sure it won't cause any problems).

First, make sure there is no I/O activity on the disk:
- I removed it from the NFS exports
- unmounted the FS

Then:
1. remove from the VG
2. remove from the multipath definitions
3. delete from the system

below here are the actual steps.

1. remove from VG group

[root@lx0080 ~]# vgremove lx0080apps02vg
Found duplicate PV O9b3hwoSUOpUpeoGJwzEBptKXo3ba2g9: using /dev/sdb1 not /dev/sdq1
Do you really want to remove volume group "lx0080apps02vg" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume "lx0080apps02LV"? [y/n]: y
  Logical volume "lx0080apps02LV" successfully removed
  Volume group "lx0080apps02vg" successfully removed

pvs
PV         VG             Fmt  Attr PSize   PFree
  /dev/dm-17 lx0080apps00vg lvm2 a-   200.00G  11.50G
  /dev/dm-18 lx0080apps00vg lvm2 a-   200.00G  11.50G
  /dev/dm-19                lvm2 --   500.00G 500.00G
  /dev/dm-20 lx0080apps01vg lvm2 a-   250.00G  50.00G
  /dev/dm-21 lx0080apps01vg lvm2 a-   100.00G  50.00G
  /dev/dm-22 lx0080apps01vg lvm2 a-   200.00G  10.00G
  /dev/dm-23 lx0080apps01vg lvm2 a-   200.00G  10.00G
  /dev/dm-24 lx0080apps01vg lvm2 a-   200.00G  10.00G
  /dev/sda2  vg00           lvm2 a-    67.88G   4.47G
As you can see, /dev/dm-19 is now an orphan. Since it's the only 500G
LUN in the server, it's easy to identify.

2. remove from multipath

from multipath -ll (other entries removed):
mpath8 (360060e8004a51a000000a51a00000200)
[size=500 GB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 3:0:0:0 sdb 8:16   [active][ready]
 \_ 4:0:0:0 sdq 65:0   [active][ready]
it has two active paths.

multipath -f mpath8

(-f flush a multipath device map specified as parameter, if unused)

[root@lx0080 ~]# multipath -f mpath8
[root@lx0080 ~]# multipath -ll | grep 360060e8004a51a000000a51a00000200


360060e8004a51a000000a51a00000200 is the UUID of the disk from
scsi_id -gus /block/sdb (or /block/sdq)

Now the multipath map is removed, but the disk is still in the system.

[root@lx0080 ~]# fdisk -l /dev/sdq
Disk /dev/sdq: 536.8 GB, 536877465600 bytes
255 heads, 63 sectors/track, 65271 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdq1               1       65271   524289276   8e  Linux LVM
[root@lx0080 ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 536.8 GB, 536877465600 bytes
255 heads, 63 sectors/track, 65271 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       65271   524289276   8e  Linux LVM
[root@lx0080 ~]#

[root@lx0080 ~]# pvs
Found duplicate PV O9b3hwoSUOpUpeoGJwzEBptKXo3ba2g9: using /dev/sdb1 not /dev/sdq1
  PV         VG             Fmt  Attr PSize   PFree
  /dev/dm-17 lx0080apps00vg lvm2 a-   200.00G  11.50G
  /dev/dm-18 lx0080apps00vg lvm2 a-   200.00G  11.50G
  /dev/dm-20 lx0080apps01vg lvm2 a-   250.00G  50.00G
  /dev/dm-21 lx0080apps01vg lvm2 a-   100.00G  50.00G
  /dev/dm-22 lx0080apps01vg lvm2 a-   200.00G  10.00G
  /dev/dm-23 lx0080apps01vg lvm2 a-   200.00G  10.00G
  /dev/dm-24 lx0080apps01vg lvm2 a-   200.00G  10.00G
  /dev/sda2  vg00           lvm2 a-    67.88G   4.47G
  /dev/sdb1                 lvm2 --   500.00G 500.00G

3. delete from the system

[root@lx0080 ~]# echo 1 > /sys/block/sdb/device/delete
[root@lx0080 ~]# echo 1 > /sys/block/sdq/device/delete
[root@lx0080 ~]# dmesg
Synchronizing SCSI cache for disk sdb:
Synchronizing SCSI cache for disk sdq:
[root@lx0080 ~]# fdisk -l /dev/sdb
[root@lx0080 ~]# fdisk -l /dev/sdq


Here are some entries generated through syslog:

- when multipath -f was run:
Apr 22 09:17:20 lx0080 multipathd: dm map mpath8 removed
Apr 22 09:17:20 lx0080 udevd[2444]: udev done!
- when deleting sdb
Apr 22 09:22:52 lx0080 kernel: Synchronizing SCSI cache for disk sdb:
Apr 22 09:22:52 lx0080 multipathd: remove sdb path checker
Apr 22 09:22:52 lx0080 udevd[2444]: udev done!

- when deleting sdq
Apr 22 09:22:57 lx0080 kernel: Synchronizing SCSI cache for disk sdq:
Apr 22 09:22:57 lx0080 multipathd: remove sdq path checker
Apr 22 09:22:57 lx0080 udevd[2444]: udev done!
Apr 22 09:23:02 lx0080 udevd[2444]: udev done!

Our storage admin can now remove the disk definition from the storage
server.