We have multiple years of experience setting up and maintaining Corosync-based clusters. We're actively involved in Corosync development, contributing bug reports and patches, and we're frequently in touch with the Corosync core developers.
If you're looking for help with Corosync, we will deliver.
What's it good for?
Corosync provides a reliable communications layer for high availability clusters. It ensures that cluster nodes can send and receive messages over multiple, redundant communication paths ("rings"). Corosync supports multiple transports, such as multicast UDP, unicast UDP, and native InfiniBand.
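For illustration, a minimal totem section of corosync.conf setting up two redundant rings over multicast UDP might look like the following sketch (all network addresses are placeholders for your own networks; setting transport: udpu would switch to unicast UDP instead):

```
totem {
    version: 2

    # Redundant Ring Protocol: "passive" alternates between rings,
    # "active" sends every message across all rings at once
    rrp_mode: passive

    # Ring 0
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.0.0
        mcastaddr: 239.255.1.1
        mcastport: 5405
    }

    # Ring 1, on a physically separate network
    interface {
        ringnumber: 1
        bindnetaddr: 10.0.42.0
        mcastaddr: 239.255.2.1
        mcastport: 5405
    }
}
```

Once the cluster is running, corosync-cfgtool -s reports the status of each ring on the local node.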
Corosync runs on all recent Linux distributions. It's the standard cluster communication layer on SLES, RHEL, Ubuntu and Debian GNU/Linux.
How is it used?
The Pacemaker cluster resource management framework uses Corosync as its preferred communications layer. Corosync is also the only supported communications layer in CMAN clusters, also known as Red Hat Cluster Suite or, currently, the Red Hat Enterprise Linux High Availability Add-On.
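As a sketch of how the two integrate on a Corosync 1.x stack, Pacemaker is pulled in through a Corosync service stanza like the one below (the file path is just a common convention):

```
# /etc/corosync/service.d/pcmk -- Corosync 1.x plugin configuration
service {
    # Announce the Pacemaker cluster manager to Corosync.
    # ver: 1 = Pacemaker's daemons are started separately, via the
    # pacemaker init script; ver: 0 = Corosync spawns them itself.
    name: pacemaker
    ver: 1
}
```

Corosync 2.x drops the plugin infrastructure altogether; there, Pacemaker talks to the Corosync libraries directly.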
Corosync also provides the communications layer for Sheepdog (a distributed storage platform for KVM), for Proxmox VE (as of version 2), and for the Apache Qpid cross-platform messaging system.
How is it related to OpenAIS?
Corosync grew out of the OpenAIS project, a 2006 implementation of the Service Availability Forum's standard Application Interface Specification (AIS). The project later split into an infrastructure platform (Corosync) and an interface/plugin layer which confusingly retained the name OpenAIS.
Development on OpenAIS has now practically ceased as applications switched from invoking the AIS-compatible layer to making direct Corosync library calls. Corosync development continues to thrive.
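To give a flavor of what a direct Corosync library call looks like, here is a minimal C sketch against the CPG (closed process groups) API, joining a group and multicasting a single message; the group name "demo" is a placeholder, and error handling is trimmed to the bare minimum:

```c
/* Minimal CPG example. Build with: gcc cpg-demo.c -o cpg-demo -lcpg */
#include <stdio.h>
#include <sys/uio.h>
#include <corosync/cpg.h>

/* Called for every message delivered to the group --
 * including, by CPG's self-delivery semantics, our own. */
static void deliver_cb(cpg_handle_t handle,
                       const struct cpg_name *group,
                       uint32_t nodeid, uint32_t pid,
                       void *msg, size_t msg_len)
{
    printf("message from node %u, pid %u: %.*s\n",
           nodeid, pid, (int)msg_len, (char *)msg);
}

int main(void)
{
    cpg_handle_t handle;
    cpg_callbacks_t callbacks = { .cpg_deliver_fn = deliver_cb };
    struct cpg_name group = { .length = 4, .value = "demo" };
    char payload[] = "hello";
    struct iovec iov = { .iov_base = payload,
                         .iov_len = sizeof(payload) - 1 };

    if (cpg_initialize(&handle, &callbacks) != CS_OK)
        return 1;

    cpg_join(handle, &group);                           /* join the group */
    cpg_mcast_joined(handle, CPG_TYPE_AGREED, &iov, 1); /* send a message */
    cpg_dispatch(handle, CS_DISPATCH_ONE);              /* run a callback */
    cpg_leave(handle, &group);
    cpg_finalize(handle);
    return 0;
}
```

With CPG_TYPE_AGREED, all group members see messages and membership changes in the same order, which is exactly the property cluster software builds on.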
How is it related to Heartbeat?
Corosync shares no code with another cluster communications layer, Heartbeat. The Pacemaker cluster resource manager currently supports both stacks (although it prefers Corosync), and Heartbeat/Pacemaker clusters can even switch to Corosync/Pacemaker with no service interruption.
Other cluster managers, such as CMAN (Red Hat Cluster), support Corosync exclusively.
If you're stuck with a Corosync problem, we can help you out. We offer a wide array of Corosync consulting services. Talk to one of us within the next 15 minutes: Ask The Expert Now!