Step 1, create an Ubuntu virtual machine on the physical host (which also runs Ubuntu in my case)
- Note that a total of four disks are required for a single-node Ceph installation. In my case, /vmdisk0, /vmdisk1 and /vmdisk2 are three mount points backed by different physical disks.
- According to the guide, 1TB is the minimum disk size allowed for a Ceph installation. Adding sparse=true and the --force option to the command creates over-committed virtual disk files with warnings (in fact /vmdisk1 and /vmdisk2 are backed by two 320GB hard drives only); see the quick check after the virt-install command below.
- Install Ubuntu as you normally would; installing the OpenSSH server is suggested to make the later steps easier.
sudo virt-install --name=ceph-single-node --vcpus=2 --cpu core2duo --ram=2048 \
--memballoon virtio --os-variant=ubuntu16.04 \
--boot hd,cdrom,menu=on --description "Guest created for Ceph S3 object storage" \
--disk path=/vmdisk0/ceph-single-node-d0.qcow2,device=disk,format=qcow2,bus=virtio,sparse=true,size=8 \
--disk path=/vmdisk0/ceph-single-node-d1.qcow2,device=disk,format=qcow2,bus=virtio,sparse=true,size=1024 \
--disk path=/vmdisk1/ceph-single-node-d2.qcow2,device=disk,format=qcow2,bus=virtio,sparse=true,size=1024 \
--disk path=/vmdisk2/ceph-single-node-d3.qcow2,device=disk,format=qcow2,bus=virtio,sparse=true,size=1024 --force \
--disk path=/share/sources/ubuntu.iso,device=cdrom,format=raw,bus=ide \
--network network=net_public,model=virtio \
--network network=net_private,model=virtio \
--graphics vnc,listen=0.0.0.0 --noautoconsole
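- On the host you can verify that the large disks are thin-provisioned and are not actually consuming 1TB each. A quick sanity check (paths as used in the command above; qemu-img info is safest to run while the guest is shut down):
du -h /vmdisk1/ceph-single-node-d2.qcow2
qemu-img info /vmdisk1/ceph-single-node-d2.qcow2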
Step 2, upgrade your guest with the latest patches. Remember to restart your guest after the upgrade completes.
sudo apt update && sudo apt upgrade
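- Reboot the guest to pick up any new kernel:
sudo reboot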
Step 3, install the repository key and perform the installation. You may want to use SSH to connect to your guest for easier copy and paste of the commands below.
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-jewel/ trusty main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph-deploy
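- A quick check that ceph-deploy is installed and on the path (the exact version string depends on the repository):
ceph-deploy --version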
Step 4, create a user "ceph-deploy" to use from this point onward.
sudo useradd -m -s /bin/bash ceph-deploy
sudo passwd ceph-deploy
- Enter a password for the new user "ceph-deploy". Consider a complex password because this user will have full sudo rights.
echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy
sudo chmod 0440 /etc/sudoers.d/ceph-deploy
sudo su - ceph-deploy
Step 5, the commands below are run as the newly created user "ceph-deploy". Just hit the "Enter" key when you are asked for the directory and passphrase of the new SSH key.
ssh-keygen
ssh-copy-id ceph-deploy@ceph-single-node
cd ~
mkdir my-cluster
cd my-cluster
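- You can verify that key-based login works before continuing; it should drop you into a shell without asking for a password:
ssh ceph-deploy@ceph-single-node
exit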
Step 6, this is not included in the reference. I needed to edit the hosts file before continuing (otherwise ceph-deploy fails with an error that it cannot resolve the hostname "ceph-single-node").
sudo vi /etc/hosts
- Comment out the 127.0.1.1 line by adding "#" in front of it, like this:
# 127.0.1.1 ceph-single-node
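- Make sure the hostname still resolves to a real address of the node, either via DNS or by adding a line with the node's LAN IP (I am assuming 192.168.1.11 here, the address used in step 15):
192.168.1.11 ceph-single-node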
Step 7, continue with deploying the single node ceph.
ceph-deploy new ceph-single-node
Step 8, this also differs from the reference because the filename has changed. Edit the file below:
vi ceph.conf
- Add the following two lines. The first line tells Ceph to keep only one copy of each object (the reference suggested 2, but I think 1 is fine for testing purposes).
- The second line is required to allow Ceph to run on a single node.
osd pool default size = 1
osd crush chooseleaf type = 0
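- Once the cluster is up (after step 9), you can confirm the replica setting took effect on the default rbd pool, for example:
ceph osd pool get rbd size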
Step 9, install the Ceph binaries; this will take some time to download the required files. Create the initial monitor right after installing the binaries.
ceph-deploy install ceph-single-node
ceph-deploy mon create-initial
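- After mon create-initial completes, the keyrings it gathered should be sitting in the working directory; a quick look (filenames may vary slightly by release):
ls -l ~/my-cluster/*.keyring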
Step 10a, prepare the three virtual disks for use with Ceph. Note that all three disks have been untouched until now and should contain no partition tables.
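- Before preparing them, you can confirm from inside the guest that vdb, vdc and vdd are bare disks with no partitions:
lsblk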
ceph-deploy osd prepare ceph-single-node:vdb
ceph-deploy osd prepare ceph-single-node:vdc
ceph-deploy osd prepare ceph-single-node:vdd
Step 10b, activate the partitions.
ceph-deploy osd activate ceph-single-node:/dev/vdb1
ceph-deploy osd activate ceph-single-node:/dev/vdc1
ceph-deploy osd activate ceph-single-node:/dev/vdd1
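- At this point the three OSDs should show as "up" and "in"; you can check with:
ceph osd tree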
Step 11, distribute the config and keys and adjust the access rights.
ceph-deploy admin ceph-single-node
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
Step 12, test the health of the Ceph deployment.
ceph -s
- If the health check reports an error, try setting pg_num and pgp_num to a higher number. I could not reproduce the error I hit the first time; it may have been a typo on my part, but I am keeping this here as a note.
# ceph osd pool set rbd pg_num 90
# ceph osd pool set rbd pgp_num 90
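- To see which pool or placement groups are actually complaining, the detailed output is usually enough to identify the fix (e.g. a pg_num that is too low):
ceph health detail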
Step 13, install object storage gateway and CephFS.
ceph-deploy rgw create ceph-single-node
ceph-deploy mds create ceph-single-node
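- The gateway listens on TCP port 7480 by default; a quick check from inside the guest (an anonymous request should return a short XML response):
wget -qO- http://localhost:7480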
Step 14, create a user for S3 connectivity. Take note of the access_key and secret_key that are generated.
sudo radosgw-admin user create --uid="testuser" --display-name="First User"
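- If you need the keys again later, they can be printed at any time with:
sudo radosgw-admin user info --uid=testuser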
- Install s3cmd and configure it; you will be asked for the access_key and secret_key to proceed.
sudo apt-get install s3cmd
s3cmd --configure
Step 15, edit the s3cmd config file to point to our server.
vi ~/.s3cfg
- Replace the IP address with that of your ceph-single-node. If you want to access it from the Internet, just create a NAT rule forwarding TCP port 7480 to that IP.
host_base = 192.168.1.11:7480
host_bucket = %(bucket)s.192.168.1.11:7480
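- The gateway here is plain HTTP on port 7480, so make sure the same file also has:
use_https = False
- Then verify the connection by listing buckets (the list will be empty at this point):
s3cmd ls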
Step 16, you are done with configuration. Now we will do some testing.
s3cmd mb s3://testdrive
echo "Hello S3 World" > hello.txt
s3cmd put hello.txt s3://testdrive
s3cmd ls s3://testdrive
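- To confirm the round trip, download the object back and check its contents:
s3cmd get s3://testdrive/hello.txt hello-downloaded.txt
cat hello-downloaded.txt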
Reference: http://palmerville.github.io/2016/04/30/single-node-ceph-install.html