GlusterFS bricks should be in a subfolder of a mountpoint

When I did my first GlusterFS setup (not that long ago) in February 2014, I documented the following steps:

Create a new LVM LV (which will become the brick):

lvcreate -n brick1 -L 10G vgdata

Format the LV (I used ext3 back then):

mkfs.ext3 /dev/mapper/vgdata-brick1

Create a local mountpoint for the brick LV:

mkdir /srv/glustermnt

Mount the brick LV to the local mountpoint (and create an fstab entry, see below):

mount /dev/mapper/vgdata-brick1 /srv/glustermnt
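
The fstab entry for this mount might look like this (a sketch; the mount options and fsck pass number are assumptions):

/dev/mapper/vgdata-brick1 /srv/glustermnt ext3 defaults 0 2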

Create the Gluster volume:

gluster volume create myglustervol replica 2 transport tcp node1:/srv/glustermnt node2:/srv/glustermnt
volume create: myglustervol: success: please start the volume to access data

This was on Debian Wheezy with glusterfs-server 3.4.1.

This seems to have changed: on Ubuntu 14.04 LTS with glusterfs-server 3.4.2, I got an error when I tried to create a volume across three nodes:

gluster volume create myglustervol replica 3 transport tcp node1:/srv/glustermnt node2:/srv/glustermnt node3:/srv/glustermnt
volume create: myglustervol: failed: The brick node1:/srv/glustermnt is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
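
For completeness: the override mentioned in the error message would be the same command with force appended. I did not use it; the warning exists for a good reason, as shown below.

gluster volume create myglustervol replica 3 transport tcp node1:/srv/glustermnt node2:/srv/glustermnt node3:/srv/glustermnt force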

I came across a mailing list discussion (see this page for the archive) where the OP ran into the same error message. The answer was, to my surprise, that a brick should never have been a direct mount point in the first place, although it worked:

The brick directory should ideally be a sub-directory of a mount point (and not a mount point directory itself) for ease of administration. We recently added code to warn about this

So I created a subfolder within the mount point (on all the other peers, too) and re-ran the volume create command with the adapted path:
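
The subfolder is an ordinary directory below the mountpoint and has to exist on every peer before the volume is created. A minimal sketch, using the brick path from the command below:

mkdir /srv/glustermnt/brick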

gluster volume create myglustervol replica 3 transport tcp node1:/srv/glustermnt/brick node2:/srv/glustermnt/brick node3:/srv/glustermnt/brick
volume create: myglustervol: success: please start the volume to access data
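
As the success message says, the volume still needs to be started before it can be used:

gluster volume start myglustervol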

Looks better. But I'm still wondering why it worked in February 2014 when the mailing list entry dates from May 2013...

Update September 15th 2014:
In the GlusterFS mailing list this topic came up again and I responded with the following use-case example, which clearly shows why a brick should be a subfolder of a mount point:

Imagine you have an LV you want to use for the gluster volume. Now you mount this LV to /mnt/gluster1. You do this on the other host(s), too, and you create the gluster volume with /mnt/gluster1 as the brick. By mistake you forget to add the mount entry to fstab, so the next time you reboot server1, /mnt/gluster1 will still be there (because it's the mountpoint) but the data is gone (because the LV is not mounted). I don't know how gluster would handle that, but it's actually easy to try it out :)
So using a subfolder within the mountpoint makes sense, because that subfolder will not exist if the mount of the LV didn't happen.
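
This is easy to demonstrate with the paths from the example above (a sketch, assuming the brick would live in a subfolder /mnt/gluster1/brick; try it on a test system, not on a live brick):

umount /mnt/gluster1          # simulate the forgotten fstab entry after a reboot
ls -d /mnt/gluster1           # the mountpoint directory still exists, but it is empty
ls -d /mnt/gluster1/brick     # fails with "No such file or directory" - the missing subfolder makes the problem obvious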



Comments (newest first)

stuart from UK wrote on Apr 22nd, 2015:

I had the same problem with an original cluster setup using Gluster 3.2.5 with direct mount points; I even recall the documentation implied this was the correct method. It makes sense to have a directory, for the reasons illustrated and documented by Redhat.

Please be aware that a single logical volume on a given server should be allocated to only one gluster volume; please see this article.

