Versions: Proxmox VE 3.4
Step 1: Create two VMs (KVM on an Ubuntu host in this case)
Step 2: Install the latest Proxmox VE build (3.4 in this post) on both VMs
Step 3: Configure the hosts table on both PVE servers:
# nano /etc/hosts
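Each node needs to resolve the other node's host name. A minimal sketch of the entries to add on both servers (the IP addresses below are placeholders; substitute your own):

```shell
# /etc/hosts (identical on pve-01 and pve-02; example IPs only)
192.168.1.11  pve-01
192.168.1.12  pve-02
```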
Step 4: Create a cluster on one of the PVE servers:
root@pve-01# pvecm create defaultcluster
Step 5: Join the second server to the newly created cluster (pve-01 below is the host name of the server on which you ran "pvecm create"; replace it with your own):
root@pve-02# pvecm add pve-01
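After step 5, it is worth confirming that both nodes joined and the cluster is quorate; `pvecm` provides this directly (host names follow the example above):

```shell
# On either node: show cluster name, quorum state, and member count
root@pve-01# pvecm status

# List all member nodes; both pve-01 and pve-02 should appear
root@pve-01# pvecm nodes
```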
The configuration is fairly straightforward. However, further configuration requires some tricks:
Case 1: The NFS mount point requires the "NO_ROOT_SQUASH" option, which is considered a security risk.
- From my own testing with a QNAP NAS unit, granting RW access to guest accounts or to the group "everybody" is not required for read/write access to the NFS mount point. However, the "NO_ROOT_SQUASH" option must be selected before OpenVZ containers can be deployed on the NFS mount point.
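On a plain Linux NFS server the equivalent of the QNAP checkbox is the `no_root_squash` export option. A sketch of the export line, assuming a hypothetical share path and subnet:

```shell
# /etc/exports on the NFS server (path and subnet are placeholders)
# no_root_squash lets the PVE root user write container files as root,
# which OpenVZ deployment needs; it is also why it is a security risk.
/share/pve-storage  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing, re-export with `exportfs -ra` on the server.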
Case 2: GlusterFS requires hosts-table modifications so that the host names of all Gluster nodes can be resolved.
- Even if you mount the GlusterFS volume by IP address, you still need to add the hostname/IP mappings on every PVE server for the file system to mount successfully.
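The reason is that the Gluster client fetches a volume file that lists the peers by the names they registered with, so every PVE node must resolve those names. A sketch, with placeholder names and IPs:

```shell
# /etc/hosts on every PVE server: map every Gluster peer
# (names and IPs below are examples only)
192.168.1.21  gluster-01
192.168.1.22  gluster-02

# Mounting by IP still works only once the mappings above exist
mount -t glusterfs 192.168.1.21:/pve-vol /mnt/pve/gluster
```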