"GlusterFS", the Gluster ant logo, "Red Hat" and "Red Hat Storage" are trademarks or registered trademarks of Red Hat, Inc. hastexo is not affiliated with the trademark owner.
GlusterFS is a scalable, distributed, replicated filesystem in userspace. hastexo provides expert professional services around GlusterFS, including training and consulting.
GlusterFS is a multi-node, read-write accessible, optionally replicated and distributed filesystem implemented in FUSE (Filesystem in Userspace). Originally developed at Gluster, Inc., which Red Hat acquired in 2011, it now forms the core of the Red Hat Storage product line. The filesystem is fully open source and ships with many Linux distributions.
GlusterFS supports automatic n-way replication of files across multiple storage areas (called bricks), making it possible to configure replicated volumes with 2, 3, or any number of replicas. Replication is synchronous and self-healing – when nodes temporarily drop off the cluster, their data automatically resynchronizes on the next file access. All replicas are, of course, writable.
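Creating a 3-way replicated volume from the GlusterFS command line might look like the following sketch; the hostnames and brick paths are placeholders, not a recommended layout:

    gluster volume create demo-vol replica 3 \
        node1:/bricks/demo node2:/bricks/demo node3:/bricks/demo
    gluster volume start demo-vol

    # After a node rejoins, pending self-heals can be inspected
    # (GlusterFS 3.3 and later):
    gluster volume heal demo-vol info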
In GlusterFS distributed volumes, individual files are placed on bricks in a balanced fashion, which allows GlusterFS filesystems to scale out seamlessly. GlusterFS uses a deterministic, hash-based distribution algorithm, so no central metadata server is needed to look up file placement. This greatly enhances GlusterFS scalability.
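A purely distributed volume simply lists its bricks without a replica count; the hashing algorithm then spreads files across them (again, all names are illustrative):

    gluster volume create dist-vol \
        node1:/bricks/dist node2:/bricks/dist node3:/bricks/dist
    gluster volume start dist-vol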
Distribution can be combined with replication to provide scale-out and redundancy at the same time – an extremely useful configuration for High Availability solutions.
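For example, with a replica count of 2 and four bricks, GlusterFS forms two replica pairs and distributes files across them. A minimal sketch, with hypothetical hosts and paths:

    gluster volume create distrep-vol replica 2 \
        node1:/bricks/dr node2:/bricks/dr \
        node3:/bricks/dr node4:/bricks/dr

Note that brick order matters here: consecutive bricks on the command line form a replica set, so bricks that replicate each other should live on different nodes.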
Striping allows GlusterFS to spread chunks of individual files over multiple bricks and present them in a unified fashion. This, too, is a useful scalability feature, particularly for very large files.
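A striped volume is created analogously with the stripe keyword; in this sketch, chunks of each file are spread over two bricks (illustrative names again):

    gluster volume create stripe-vol stripe 2 \
        node1:/bricks/stripe node2:/bricks/stripe
    gluster volume start stripe-vol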
With asynchronous geographic replication (geo-replication), GlusterFS supports additional data redundancy in a backup datacenter or in cloud-hosted instances. Geo-replication builds on the rock-solid rsync algorithm and comes with built-in encryption via SSH tunnelling.
Geo-replication supports configurations with one master and multiple slaves; even daisy-chained, cascading replication is possible.
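Starting geo-replication from a master volume to a slave volume on a remote host might look like this – master-vol, backup-host and backup-vol are placeholders, and the exact syntax varies between GlusterFS releases:

    gluster volume geo-replication master-vol backup-host::backup-vol start
    gluster volume geo-replication master-vol backup-host::backup-vol status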
Built-in NFS support
Besides its native FUSE client, GlusterFS comes with built-in support for NFS, making it a scalable drop-in replacement for the native Linux NFS server. Any TCP-capable NFSv3 client can access NFS-exported GlusterFS volumes.
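A client can thus mount the same volume either through the native FUSE client or over plain NFSv3 on TCP; server and volume names below are illustrative:

    # Native GlusterFS (FUSE) client:
    mount -t glusterfs node1:/demo-vol /mnt/gluster

    # Any TCP-capable NFSv3 client:
    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp node1:/demo-vol /mnt/gluster-nfs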
How we can help you with GlusterFS
Our team has expert knowledge in GlusterFS and its integration in high availability solutions based on the Pacemaker stack. We offer remote and on-site consultancy services around GlusterFS, and GlusterFS is also a key topic in our hastexo High Availability Expert training offering.