<p>Kraken Install</p>
<p>Kraken is a free Ceph dashboard for monitoring and statistics.</p>
<pre>1. To install the Kraken web GUI, which provides simple monitoring of the cluster, proceed as follows:
yum install git django python-pip screen libxml2-devel libxml++-devel python-devel gcc libxslt-devel
pip install requests
pip install requests --upgrade
pip install django
useradd kraken
cd /home/kraken/
Clone krakendash locally:
git clone https://github.com/krakendash/krakendash
cp krakendash/contrib/*.sh .
cd krakendash/
pip install -r requirements.txt
cd /home/kraken/
Launch the ceph API and django:
./api.sh
./django.sh
The application listens on TCP port 8000.
For the POC cluster it can be viewed at http://192.168.255.239:8000/.
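As a quick sanity check (a sketch only; the screen session names are arbitrary and the IP is the POC address used above), the two helper scripts can be started in detached screen sessions and the dashboard probed with curl:
# start the ceph API and the django server in detached screen sessions (screen was installed in step 1)
screen -dmS ceph-api ./api.sh
screen -dmS kraken-django ./django.sh
# confirm that something is answering on TCP 8000
curl -I http://192.168.255.239:8000/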
2. Adding OSD nodes to the ceph cluster
ceph-deploy install node6 node7
ceph-deploy disk list node6
ceph-deploy disk zap node6:/dev/xvdb
ceph-deploy osd prepare node6:/dev/xvdb
The OSD data partition appears already mounted on the node:
[root@node6 ~]# df -h
Filesystem                       Size  Used  Avail Use% Mounted on
/dev/mapper/centos_centos7-root  8.5G  1.2G  7.4G   14% /
devtmpfs                         3.9G     0  3.9G    0% /dev
tmpfs                            3.7G     0  3.7G    0% /dev/shm
tmpfs                            3.7G  8.4M  3.7G    1% /run
tmpfs                            3.7G     0  3.7G    0% /sys/fs/cgroup
/dev/xvda1                       497M  161M  337M   33% /boot
/dev/xvdb1                        95G  1.4G   94G    2% /var/lib/ceph/osd/ceph-7
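A quick way to double-check the prepared OSD from the node itself (a sketch; the OSD id 7 is taken from the df output above):
mount | grep /var/lib/ceph/osd        # the OSD data partition should be listed here
cat /var/lib/ceph/osd/ceph-7/whoami   # prints the OSD id once the data directory is initialized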
- same for node7:
[root@node7 ~]# df -h
Filesystem                       Size  Used  Avail Use% Mounted on
/dev/mapper/centos_centos7-root  8.5G  1.2G  7.4G   14% /
devtmpfs                         3.9G     0  3.9G    0% /dev
tmpfs                            3.7G     0  3.7G    0% /dev/shm
tmpfs                            3.7G  8.4M  3.7G    1% /run
tmpfs                            3.7G     0  3.7G    0% /sys/fs/cgroup
/dev/xvda1                       497M  161M  337M   33% /boot
/dev/xvdb1                        95G  2.1G   93G    3% /var/lib/ceph/osd/ceph-8
[root@node7 ~]#
ceph-deploy osd activate node6:/dev/xvdb
ceph-deploy osd create node6:/dev/xvdb
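The same sequence applies to node7; a sketch mirroring the commands above (note that ceph-deploy's "osd create" combines "prepare" and "activate", so the one-step form is an alternative to running both):
ceph-deploy disk zap node7:/dev/xvdb
ceph-deploy osd prepare node7:/dev/xvdb
ceph-deploy osd activate node7:/dev/xvdb
# or, equivalently, in a single step:
# ceph-deploy osd create node7:/dev/xvdb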
Once node7 has been prepared and activated in the same way, the new OSDs appear in the ceph tree:
[ceph@ceph-admin cluster]$ ceph osd tree
# id  weight      type name        up/down  reweight
-1    0.21        root default
-2    0.009995      host node2
 0    0.009995        osd.0        up       1
 5    0               osd.5        up       1
-3    0.009995      host node3
 1    0.009995        osd.1        up       1
-4    0.009995      host node1
 2    0.009995        osd.2        up       1
 6    0               osd.6        up       1
-5    0             host node4
 3    0               osd.3        down     0
-6    0             host node5
 4    0               osd.4        down     0
-7    0.09          host node6
 7    0.09            osd.7        up       1
-8    0.09          host node7
 8    0.09            osd.8        up       1
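After the new OSDs join, it is worth confirming from the admin node that the cluster absorbs them and settles back to a healthy state (a sketch; the counts in the comments are only what the tree above suggests):
ceph -s              # overall status; the osdmap line should now count 9 OSDs
ceph osd stat        # e.g. "9 osds: 7 up, 7 in", since osd.3 and osd.4 are down in this POC
ceph health detail   # lists any placement groups still backfilling or recovering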