Ceph Deploy

Ceph - Object Storage
 - Block Storage
 - Filesystem

For a Ceph cluster setup we need a minimum of 4 nodes:

ceph-admin (for deployment), node1 (monitor), node2 (OSD), node3 (OSD)

1. NTP and SSH must be running on all Ceph nodes,

and, if there are no DNS records, edit the /etc/hosts file with the IPs and hostnames of all nodes.
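A minimal /etc/hosts sketch, for illustration only (node1's address matches the monitor address shown later in ceph status; the other addresses are made up):

192.168.255.239  ceph-admin
192.168.255.240  node1
192.168.255.241  node2
192.168.255.242  node3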

Disable the firewall and SELinux (or allow ports 6789/tcp and 6800-7300/tcp through the firewall).
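On CentOS 7 with firewalld, a sketch of both options (disable them outright, or keep the firewall and open only the Ceph ports):

sudo systemctl stop firewalld; sudo systemctl disable firewalld
sudo setenforce 0    # and set SELINUX=permissive in /etc/selinux/config to persist

# or, keeping the firewall on, open the monitor and OSD ports:
sudo firewall-cmd --permanent --add-port=6789/tcp
sudo firewall-cmd --permanent --add-port=6800-7300/tcp
sudo firewall-cmd --reload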

With visudo, change the line

Defaults requiretty

to

Defaults:ceph !requiretty


2. Add the Ceph repo on each node (Firefly is an earlier Ceph release; Giant is the latest):

vim /etc/yum.repos.d/ceph.repo


[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-firefly/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-firefly/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1


then update and install ceph-deploy (together with NTP, SSH and the yum priorities plugin):

yum update && yum install ceph-deploy ntp ntpdate ntp-doc openssh-server yum-plugin-priorities

3. Create the user needed for the installation, on each node
(Note: do not install Ceph as root or via sudo.)

useradd -d /home/ceph -m ceph
passwd ceph

Grant sudo rights on all cluster nodes:

echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
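A quick sanity check of the sudo rule (run as the ceph user; it should print root without asking for a password):

su - ceph
sudo whoami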

4. Log in with the ceph user on the ceph-admin node and generate an SSH key:

ssh-keygen 

then copy the key to all nodes with
ssh-copy-id ceph@node1 (repeat for node2 and node3)

Verify that SSH works with the ceph user to all cluster nodes:
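For example (each command should print the remote hostname without prompting for a password):

ssh ceph@node1 hostname
ssh ceph@node2 hostname
ssh ceph@node3 hostname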

On ceph-admin, modify or create the ~/.ssh/config file:


Host node1
   Hostname node1
   User ceph
Host node2
   Hostname node2
   User ceph
Host node3
   Hostname node3
   User ceph
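With this config in place, a plain ssh to a node name logs in as the ceph user, which is what ceph-deploy relies on when running remote commands:

ssh node1    # connects as ceph@node1 without specifying the user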


5. Create the cluster with the ceph user on ceph-admin:

mkdir telekom-cluster; cd telekom-cluster

bring up the monitor node:

ceph-deploy new node1

If we have only 2 OSDs, as in our case, add the following to the [global] section of ceph.conf:

osd pool default size = 2
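One way to add it, assuming we are still in the cluster directory and that the ceph.conf generated by ceph-deploy new contains only a [global] section (so a plain append lands in the right place):

echo "osd pool default size = 2" >> ceph.conf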


Install Ceph:

ceph-deploy install ceph-admin node1 node2 node3

Create the keys (deploy the initial monitor and gather the keyrings):

ceph-deploy mon create-initial
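After this step the cluster directory should contain the gathered keyrings (the exact list depends on the ceph-deploy version):

ls -1 *.keyring
# typically: ceph.mon.keyring, ceph.client.admin.keyring,
#            ceph.bootstrap-osd.keyring, ceph.bootstrap-mds.keyring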

6. On the OSD nodes, create the target directories (whole disks can also be used):



ssh node2
sudo mkdir /var/local/osd0
exit

ssh node3
sudo mkdir /var/local/osd1
exit



Prepare and activate them as OSDs:

ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

7. Copy the config and admin key to all nodes, so the Ceph CLI can be used on each node:

ceph-deploy admin ceph-admin node1 node2 node3

sudo chmod +r /etc/ceph/ceph.client.admin.keyring
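The keyring is copied to every node, so the chmod has to run on each of them; a sketch for the remaining nodes, run from ceph-admin (the remote sudo works without a prompt thanks to the NOPASSWD rule and the !requiretty change from step 1):

for n in node1 node2 node3; do ssh $n sudo chmod +r /etc/ceph/ceph.client.admin.keyring; done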

8. If the installation is OK, running
ceph health
should report HEALTH_OK, with all placement groups active+clean.
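A couple of additional checks worth running at this point:

ceph osd tree                             # both OSDs should be listed as up
ceph quorum_status --format json-pretty   # monitor quorum details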

Finally, a series of small checks: cluster status, creating pools and storing objects:

[ceph@ceph-admin cluster]$ ceph status
    cluster f43fcd51-c798-4c4e-93f8-a525ddc665de
     health HEALTH_OK
     monmap e1: 1 mons at {node1=192.168.255.240:6789/0}, election epoch 2, quorum 0 node1
     osdmap e9: 2 osds: 2 up, 2 in
      pgmap v14: 192 pgs, 3 pools, 0 bytes data, 0 objects
            12610 MB used, 4713 MB / 17324 MB avail
                 192 active+clean



[ceph@ceph-admin cluster]$ ceph osd lspools
0 data,1 metadata,2 rbd,

[ceph@ceph-admin cluster]$ ceph osd pool create pool-A 128
pool 'pool-A' created

[ceph@ceph-admin cluster]$ ceph osd lspools
0 data,1 metadata,2 rbd,3 pool-A,

[ceph@ceph-admin cluster]$ dd if=/dev/zero of=object-A bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.00864151 s, 1.2 GB/s

[ceph@ceph-admin cluster]$ dd if=/dev/zero of=object-B bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.00925061 s, 1.1 GB/s

[ceph@ceph-admin cluster]$  rados -p pool-A put object-A  object-A

[ceph@ceph-admin cluster]$ rados -p pool-A put object-B  object-B

[ceph@ceph-admin cluster]$ rados -p pool-A ls
object-A
object-B
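To double-check, an object can be read back and its placement inspected (a sketch; the PG/OSD mapping in the output differs from cluster to cluster):

rados -p pool-A get object-A /tmp/object-A.copy
md5sum object-A /tmp/object-A.copy     # checksums should match
ceph osd map pool-A object-A           # shows the PG and OSDs holding the object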


[ceph@ceph-admin cluster]$ ceph status
    cluster f43fcd51-c798-4c4e-93f8-a525ddc665de
     health HEALTH_OK
     monmap e1: 1 mons at {node1=192.168.255.240:6789/0}, election epoch 2, quorum 0 node1
     osdmap e11: 2 osds: 2 up, 2 in
      pgmap v93: 320 pgs, 4 pools, 20480 kB data, 2 objects
            12654 MB used, 4669 MB / 17324 MB avail
                 320 active+clean
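Optional cleanup after the test (the pool name is passed twice plus a confirmation flag, on purpose, to avoid accidental deletes):

rados -p pool-A rm object-A
rados -p pool-A rm object-B
ceph osd pool delete pool-A pool-A --yes-i-really-really-mean-it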