OpenStack Tour de Force
Florian's tutorial at OSCON 2013. This page contains important information for tutorial attendees, and will be updated regularly in the run-up to the conference.
Please note: If you are a tutorial attendee, you might want to subscribe to this page (scroll to the bottom and check Subscribe to → This page, then hit Update). That way, you will get a quick email any time this post is updated, or someone adds a comment. You must be logged in to subscribe.
All the important basic details for this tutorial are on the OSCON website. Please make sure you take a peek.
Expect the tutorial to be crowded. There were over 130 registrations two weeks prior to OSCON, and it's safe to assume that the number will still go up. You might want to be there early, too, to make sure you get your preferred seat.
The VirtualBox images are available via Ubuntu One. It would be great if you could bring these with you to the tutorial, preferably installed, all fired up and ready to go. There are two images:
- A Puppet master box, which also doubles as a pre-populated proxy so we don't hit the network with any package installations, openstack-puppet.ova (approx. 600M).
- One template OpenStack node machine, openstack.ova (approx. 300M).
(There's also an MD5SUMS file for verifying the downloads.)
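The MD5SUMS file lets you verify the images before the tutorial; a minimal sketch, assuming the checksum file and both .ova images sit in your current directory:

```shell
# Check the downloaded images against the published checksums; md5sum
# prints "OK" (or "FAILED") for each file listed in MD5SUMS. The guard
# simply skips the check if you haven't downloaded the file yet.
if [ -f MD5SUMS ]; then
    md5sum -c MD5SUMS
fi
```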
You'll need only one incarnation of the Puppet master, but three of the OpenStack nodes.
The OpenStack nodes also rely on three host-only networks for internal communication, which you'll need to configure in VirtualBox.
To create the host-only networks, either open the VirtualBox Manager, go to File → Preferences → Network, and create vboxnet0, vboxnet1 and vboxnet2 there, or run the following commands with the VBoxManage utility from a terminal:
$ VBoxManage hostonlyif create
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Interface 'vboxnet0' was successfully created
$ VBoxManage hostonlyif ipconfig vboxnet0 \
  --ip 192.168.122.1 --netmask 255.255.255.0
$ VBoxManage hostonlyif create
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Interface 'vboxnet1' was successfully created
$ VBoxManage hostonlyif ipconfig vboxnet1 \
  --ip 192.168.133.1 --netmask 255.255.255.0
$ VBoxManage hostonlyif create
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Interface 'vboxnet2' was successfully created
$ VBoxManage hostonlyif ipconfig vboxnet2 \
  --ip 192.168.144.1 --netmask 255.255.255.0
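You can double-check the result by listing the host-only interfaces VirtualBox knows about; vboxnet0 through vboxnet2 should appear with the addresses configured above (this assumes VBoxManage is on your PATH):

```shell
# Show every host-only interface with its name, IP address and netmask;
# the guard skips the command gracefully on machines without VirtualBox.
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage list hostonlyifs
fi
```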
Then, when you import the appliances into VirtualBox, make sure you reinitialize the machines' MAC addresses. Then:
- Boot up each OpenStack node
- Log in as root with the password
- Run ./fixup-host with the respective hostname (for example, ./fixup-host charlie) on each of the three OpenStack nodes.

Then reboot them and you're good to go. No changes are necessary to the Puppet node.
RAM on the machines is deliberately configured on the low side, so as to make the VMs run on laptops without copious amounts of memory. If your laptop does have a good amount of memory, turning RAM up to 2G for the OpenStack nodes won't hurt.
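If you do raise the memory, the VBoxManage CLI works too; a sketch assuming hypothetical VM names node1 through node3 (substitute whatever names you gave the imported machines, and power them off first):

```shell
# Bump each OpenStack node VM to 2048 MB of RAM. The VM names below are
# placeholders; the guard skips the loop where VirtualBox is absent.
if command -v VBoxManage >/dev/null 2>&1; then
    for vm in node1 node2 node3; do
        VBoxManage modifyvm "$vm" --memory 2048
    done
fi
```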
If you turn up early for the tutorial (around 1pm), you can still grab a USB key containing the VM images from Florian.
The tutorial is in room Portland 255, on the upper level next to the grand ballroom where Ignite and the OSCON keynotes take place.