Cluster file systems are complex and challenging. They are sometimes required if you're building huge network storage pools, or if you want to run clustered instances of certain applications. You typically deploy them together with a cluster manager like Pacemaker.
A cluster file system is not a cure-all. It requires careful consideration and a flawless technical setup, and if you use it incorrectly, it will do you more harm than good.
We're well versed in the pros and cons of available cluster file systems, and we'll be happy to share!
Which one do I choose?
Numerous F/OSS cluster file systems exist: OCFS2, GFS2, GlusterFS, Lustre ... Each has its own specific advantages and disadvantages, and hands-on experience is vital for determining the best solution for a given set of requirements. And even once you have found the perfect match, the setup and the integration into the cluster manager still need to be done, as sketched below.
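To give an idea of what that integration typically involves, here is a minimal crm shell sketch that clones a shared GFS2 mount across all cluster nodes. The resource names, device path, and mount point are placeholders, and the exact resources a given distribution requires (o2cb for OCFS2, for instance) will differ.

    # Hypothetical example: mount a shared GFS2 volume on every node.
    # Device path and mount point are placeholders.
    primitive p_fs_shared ocf:heartbeat:Filesystem \
        params device="/dev/vg_shared/lv_data" directory="/srv/data" fstype="gfs2" \
        op monitor interval="20s" timeout="40s"
    clone cl_fs_shared p_fs_shared meta interleave="true"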
How about locking and fencing?
Cluster file systems need a way to ensure that a file is not changed on two nodes of the storage network simultaneously. They normally rely on a Distributed Lock Manager (DLM), which needs to be set up appropriately, as in the sketch below.
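As a purely illustrative continuation of the sketch above, and assuming the ocf:pacemaker:controld agent that manages dlm_controld, the DLM is typically cloned across all nodes and tied to the file system with ordering and colocation constraints:

    # Hypothetical example: run the DLM control daemon on every node,
    # and only mount the cluster file system where the DLM is already up.
    primitive p_dlm ocf:pacemaker:controld \
        op monitor interval="60s" timeout="60s"
    clone cl_dlm p_dlm meta interleave="true"
    colocation co_fs_with_dlm inf: cl_fs_shared cl_dlm
    order o_dlm_before_fs inf: cl_dlm cl_fs_shared

With interleave set on both clones, the constraints apply per node, so a DLM restart on one node does not force file system restarts on the others.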
Cluster file systems also need a reliable fencing setup for when things go wrong. Fencing is a means of reliably cutting misbehaving nodes off from shared resources. It is usually straightforward to set up (see the sketch below), but complex and challenging to test reliably.
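For illustration, here is what a simple fencing setup might look like with the external/ipmi STONITH plugin; the node name, address, and credentials are placeholders, and the appropriate fencing agent depends entirely on your hardware or virtualization platform:

    # Hypothetical example: IPMI-based fencing for a node named "alice".
    # A cluster would carry one such resource per node, each barred from
    # running on the node it is meant to fence.
    primitive p_stonith_alice stonith:external/ipmi \
        params hostname="alice" ipaddr="192.0.2.10" userid="fencing" passwd="secret" interface="lan"
    location l_stonith_alice p_stonith_alice -inf: alice
    property stonith-enabled="true"

The configuration is the easy part; proving that it works means deliberately failing a node and verifying that the surviving nodes actually fence it and recover.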
What if I'm not sure if I need one?
Cluster file systems fit a clearly defined, narrow range of applications perfectly. However, we have frequently seen them deployed without necessity, and in those cases our experience usually lets us suggest simpler and more robust solutions.
Prior to deploying a cluster file system, we invite you to discuss your setup with us. Let us check whether a cluster file system is the only way to go. Or do you already have a cluster file system in place and are not satisfied with it?
Need help with OCFS2, GFS2, Gluster, Ceph, or Lustre? We offer a broad array of consulting and training services for cluster file systems. Ask The Expert Now!