
I have an application that installs itself into the /opt/my_app/ directory. I'd like to set up two servers in an active-passive cluster and sync the whole directory with DRBD. From what I understand, DRBD requires a block device, so I would add a new virtual disk (both are ESX VMs), create a partition, then a physical volume, a volume group, and a logical volume. The question I have is: is it technically possible to put /opt/my_app/ on the DRBD device and sync it between the two nodes?
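The backing-device setup I have in mind would look roughly like this (the disk name /dev/sdb and the vg_drbd/lv_my_app names are just placeholders):

pvcreate /dev/sdb                          # initialize the new virtual disk as an LVM physical volume
vgcreate vg_drbd /dev/sdb                  # create a volume group on it
lvcreate -n lv_my_app -l 100%FREE vg_drbd  # logical volume that would back the DRBD device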

EDIT:

[root@server2 otrs]# pcs config
Cluster Name: otrs_cluster
Corosync Nodes:
 server1 server2
Pacemaker Nodes:
 server1 server2

Resources:
 Group: OTRS
  Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: cidr_netmask=8 ip=10.0.0.60
   Operations: monitor interval=20s (ClusterIP-monitor-interval-20s)
               start interval=0s timeout=20s (ClusterIP-start-interval-0s)
               stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
  Resource: otrs_file_system (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=/dev/drbd0 directory=/opt/otrs/ fstype=ext4
   Operations: monitor interval=20 timeout=40 (otrs_file_system-monitor-interval-20)
               start interval=0s timeout=60 (otrs_file_system-start-interval-0s)
               stop interval=0s timeout=60 (otrs_file_system-stop-interval-0s)
 Master: otrs_data_clone
  Meta Attrs: master-node-max=1 clone-max=2 notify=true master-max=1 clone-node-max=1
  Resource: otrs_data (class=ocf provider=linbit type=drbd)
   Attributes: drbd_resource=otrs
   Operations: demote interval=0s timeout=90 (otrs_data-demote-interval-0s)
               monitor interval=30s (otrs_data-monitor-interval-30s)
               promote interval=0s timeout=90 (otrs_data-promote-interval-0s)
               start interval=0s timeout=240 (otrs_data-start-interval-0s)
               stop interval=0s timeout=100 (otrs_data-stop-interval-0s)

Stonith Devices:
Fencing Levels:

Location Constraints:
  Resource: ClusterIP
    Enabled on: server1 (score:INFINITY) (role: Started) (id:cli-prefer-ClusterIP)
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: otrs_cluster
 dc-version: 1.1.16-12.el7_4.8-94ff4df
 have-watchdog: false
 last-lrm-refresh: 1525108871
 stonith-enabled: false

Quorum:
  Options:

[root@server2 otrs]# pcs status
Cluster name: otrs_cluster
Stack: corosync
Current DC: server1 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Mon Apr 30 14:11:54 2018
Last change: Mon Apr 30 13:27:47 2018 by root via crm_resource on server2

2 nodes configured
4 resources configured

Online: [ server1 server2 ]

Full list of resources:

 Resource Group: OTRS
     ClusterIP          (ocf::heartbeat:IPaddr2):       Started server2
     otrs_file_system   (ocf::heartbeat:Filesystem):    Started server2
 Master/Slave Set: otrs_data_clone [otrs_data]
     Masters: [ server2 ]
     Slaves: [ server1 ]

Failed Actions:
* otrs_file_system_start_0 on server1 'unknown error' (1): call=78, status=complete,
  exitreason='Couldn't mount filesystem /dev/drbd0 on /opt/otrs',
  last-rc-change='Mon Apr 30 13:21:13 2018', queued=0ms, exec=151ms

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@server2 otrs]#

1 Answer


It's certainly possible.

After you've added the block device and created the LVM logical volume to back the DRBD device, you would configure and initialize the DRBD resource (drbdadm create-md <res> and drbdadm up <res>) on both nodes.
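For reference, a minimal DRBD 8.4-style resource definition might look like this; the resource name my_app, the backing volume path, and the replication addresses/port are all placeholders you'd adjust to your environment:

# /etc/drbd.d/my_app.res  -- placeholder names and addresses
resource my_app {
    device    /dev/drbd0;
    disk      /dev/vg_drbd/lv_my_app;
    meta-disk internal;

    on server1 {
        address 10.0.0.61:7789;
    }
    on server2 {
        address 10.0.0.62:7789;
    }
}

Then, on both nodes:

drbdadm create-md my_app   # write DRBD metadata onto the backing volume
drbdadm up my_app          # attach the disk and connect to the peer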

Promote one node to Primary (NOTE: you only need to force primary the first time you're promoting a device since you have Inconsistent/Inconsistent disk states): drbdadm primary <res> --force
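On DRBD 8.x you can watch the roles and the initial sync via /proc/drbd (newer drbd-utils also provide drbdadm status):

cat /proc/drbd   # shows Primary/Secondary roles and resync progress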

Then you can put a filesystem on the device and mount it anywhere on the system, including /opt/my_app, just like you would with an ordinary block device.
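For example, on the Primary node only (ext4 here to match the configuration in your edit):

mkfs.ext4 /dev/drbd0           # create the filesystem on the DRBD device
mount /dev/drbd0 /opt/my_app   # mount it like any other block device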

If there is existing data in /opt/my_app/ that you need to move to the DRBD device, you could mount the device somewhere else, move/copy the data from /opt/my_app/ to that mount point, and then remount the DRBD device on /opt/my_app. Alternatively, you could use a symlink to point /opt/my_app at the DRBD device's mount point.
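Roughly, using /mnt as a temporary mount point (just an example path):

mount /dev/drbd0 /mnt          # mount the DRBD device somewhere temporary
cp -a /opt/my_app/. /mnt/      # copy the existing data, preserving ownership and permissions
umount /mnt
mount /dev/drbd0 /opt/my_app   # remount the device in its final location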

Updated answer after EDIT:

You need to add colocation and ordering constraints to your cluster configuration to tell the OTRS resource group to only run on the DRBD Master and to only start after the DRBD Master has been promoted.

These commands should add those constraints:

# pcs constraint colocation add OTRS with otrs_data_clone INFINITY with-rsc-role=Master
# pcs constraint order promote otrs_data_clone then start OTRS
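You can then check that the constraints are in place and that the group follows the DRBD Master with:

# pcs constraint
# pcs status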
  • OK, I think I figured it out, but after integrating DRBD with Pacemaker I'm facing issues: it appears that Pacemaker tries to mount the filesystem on both nodes (please see the edit in my question). Thank you in advance. Commented May 1, 2018 at 17:53
  • I edited my answer. Hope that helps Commented May 2, 2018 at 16:36
  • Thank you Matt. Wouldn't I achieve the same thing by configuring the OTRS group? Isn't the idea of a group that its resources run on the same node, and isn't the order in which Pacemaker starts the resources controlled by the resource order? That is at least what I deduce. Commented May 6, 2018 at 19:11
  • You need the constraints to tell the OTRS group to start on the DRBD master. Commented May 7, 2018 at 3:10
  • Hello Matt. OK, it looks like it works now, although I think that # pcs constraint colocation add otrs_data_clone INFINITY with-rsc-role=Master was missing one more resource, is that possible? It worked with the following: pcs constraint colocation add otrs_file_system otrs_data_clone INFINITY with-rsc-role=Master Commented May 9, 2018 at 13:56
