Fencing in Libvirt/KVM virtualized cluster nodes
Often, people deploy the Pacemaker stack in virtual environments for purposes of testing and evaluation. In such environments, it's easy to test Pacemaker's fencing capabilities by tying in with the hypervisor.
This quick howto illustrates how to configure fencing for two virtual cluster nodes hosted on a libvirt/KVM hypervisor host.
libvirt configuration (hypervisor)
In order to do libvirt fencing, your hypervisor should have its libvirtd daemon listen on a network socket. libvirtd is capable of doing this both on an encrypted TLS socket and on a regular, unencrypted TCP port. Needless to say, for production use you should only use TLS, but for testing and evaluation – and for that purpose only – TCP is fine.
In order for your hypervisor to listen on an unauthenticated, insecure, unencrypted network socket (did we mention that's unsuitable for production?), add the following lines to your libvirtd configuration file, usually /etc/libvirt/libvirtd.conf:
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "none"
You can also set the listen_addr parameter, for example to have libvirtd listen only on the network that your virtual machines run in. If you don't set listen_addr, libvirtd will simply listen on the wildcard address.
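For example, to bind libvirtd only to the hypervisor address used later in this howto (192.168.0.1 is just the example address here; substitute your own):

listen_addr = "192.168.0.1"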
You'll also have to add the --listen flag to your libvirtd invocation. On Debian/Ubuntu platforms, you can do so by editing the /etc/default/libvirt-bin configuration file.
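On such a platform, the relevant line would end up looking something like the following (the exact set of default options varies between releases, so treat this as a sketch; -l is the short form of --listen):

libvirtd_opts="-d -l"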
Once you've done that and restarted libvirtd, you can use netstat -ltp to check whether libvirtd is in fact listening on its configured port, 16509/tcp. Also, make sure that you don't have a firewall blocking that port.
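A quick way to verify both from one of your virtual machines is a simple TCP probe against the hypervisor (this assumes nc is installed; 192.168.0.1 is the example hypervisor address used below):

nc -zv 192.168.0.1 16509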
libvirt configuration (virtual machines)
Inside your virtual machines, you'll also have to install the libvirt client binaries – the fencing mechanism uses the virsh utility under the covers. Some platforms provide a libvirt-client package for that purpose; for others, you'll simply have to install the full libvirt package.
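For example (package names are an assumption here and differ between distributions and releases, so check what your platform actually ships):

yum install libvirt-client        # RHEL/CentOS/Fedora: separate client package
apt-get install libvirt-bin       # Debian/Ubuntu of this vintage: virsh ships in libvirt-bin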
Once that is set up, you should be able to run this command from inside your virtual machines:
virsh --connect=qemu+tcp://<IP of your hypervisor>/system \
  list --all
... and that command should list all the domains running on that host, including the one you're connecting from.
In one of your virtual machines, you can now set up your fencing configuration.
This example assumes that you have two nodes named alice and bob, that their corresponding virtual machine domain names are also alice and bob, and that they can reach their hypervisor by TCP at 192.168.0.1:
primitive p_fence_alice stonith:external/libvirt \
  params hostlist="alice" \
    hypervisor_uri="qemu+tcp://192.168.0.1/system" \
  op monitor interval="60"
primitive p_fence_bob stonith:external/libvirt \
  params hostlist="bob" \
    hypervisor_uri="qemu+tcp://192.168.0.1/system" \
  op monitor interval="60"
location l_fence_alice p_fence_alice -inf: alice
location l_fence_bob p_fence_bob -inf: bob
property stonith-enabled=true
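One way to get this into the cluster configuration is via the crm shell, assuming you have saved the snippet above to a file; the name fence.crm is just an example:

crm configure load update fence.crm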
Now you can test fencing to the best of your abilities.
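For example, you could kill Corosync (or the whole node) on bob and watch alice take it down, or trigger a fence action by hand; crm node fence is one way to do the latter in reasonably recent crmsh versions:

crm node fence bob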