Install a Kafka Cluster and ZooKeeper with High Availability

kafka multi broker

What is Apache Kafka?

Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. https://kafka.apache.org/

What is Apache ZooKeeper?

ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications. https://zookeeper.apache.org/

Set Up a ZooKeeper Cluster with High Availability

Install Java on all ha-zoo* nodes

sudo apt update
sudo apt install -y openjdk-11-jdk
Set up local DNS entries on all ha-zoo* nodes
sudo vi /etc/hosts
10.20.20.51 ha-zoo1
10.20.20.52 ha-zoo2
10.20.20.53 ha-zoo3

  • Download ZooKeeper on all ha-zoo* nodes
cd /opt
sudo wget https://downloads.apache.org/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz
  • Extract the ZooKeeper package on all ha-zoo* nodes
sudo tar -xvf apache-zookeeper-3.6.2-bin.tar.gz
sudo mv apache-zookeeper-3.6.2-bin zookeeper
cd zookeeper
  • Configure the ZooKeeper ensemble on all ha-zoo* nodes
sudo vi conf/zoo.cfg
tickTime=2000
dataDir=/data/zookeeper
clientPort=2181
initLimit=10
syncLimit=5
server.1=ha-zoo1:2888:3888
server.2=ha-zoo2:2888:3888
server.3=ha-zoo3:2888:3888

Set a unique server ID on each node. First create the data directory on all ha-zoo* nodes:
sudo mkdir -p /data/zookeeper

On node ha-zoo1, create the myid file with the following content and save:
vi /data/zookeeper/myid
1
On node ha-zoo2, create the file and save:
vi /data/zookeeper/myid
2
On node ha-zoo3, create the file and save:
vi /data/zookeeper/myid
3
Run ZooKeeper on all ha-zoo* nodes (from /opt/zookeeper):
java -cp lib/zookeeper-3.6.2.jar:lib/*:conf org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo.cfg
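The foreground java command above is fine for a first run. Alternatively, the helper scripts shipped in the binary distribution can start ZooKeeper in the background and report each node's role; a small sketch, assuming it is run from /opt/zookeeper with the conf/zoo.cfg shown above:

bin/zkServer.sh start
bin/zkServer.sh status

The status command should report Mode: leader on exactly one node and Mode: follower on the other two.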
Test from node ha-zoo3 by connecting to ha-zoo1:
cd /opt/zookeeper
bin/zkCli.sh -server ha-zoo1:2181
[zk: ha-zoo1:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: ha-zoo1:2181(CONNECTED) 1] quit

Well done, the ZooKeeper cluster is running. Next we will set up the Kafka cluster.

Set Up a Multi-Broker Kafka Cluster with High Availability

Install Java on all ha-kafka* nodes
sudo apt update
sudo apt install -y openjdk-11-jdk
Set up local DNS entries on all ha-kafka* nodes
sudo vi /etc/hosts
10.20.20.41 ha-kafka1
10.20.20.42 ha-kafka2
10.20.20.43 ha-kafka3
10.20.20.51 ha-zoo1
10.20.20.52 ha-zoo2
10.20.20.53 ha-zoo3
Create a folder inside /opt and download Kafka
mkdir /opt/kafka
curl https://downloads.apache.org/kafka/2.6.0/kafka_2.13-2.6.0.tgz -o /opt/kafka/kafka.tgz
Extract Kafka
cd /opt/kafka
tar xvfz kafka.tgz --strip 1
Create a directory for Kafka data on all nodes
sudo mkdir -p /data/kafka/log
sudo chown -R ubuntu:ubuntu /data/kafka/
Point Kafka at the ZooKeeper ensemble. On all Kafka cluster nodes, edit this file:
vi config/server.properties
log.dirs=/data/kafka/log
num.partitions=3
zookeeper.connect=ha-zoo1:2181,ha-zoo2:2181,ha-zoo3:2181
Set a unique broker ID on each Kafka node. On ha-kafka1:
vi config/server.properties
broker.id=0
On ha-kafka2:
vi config/server.properties
broker.id=1
On ha-kafka3:
vi config/server.properties
broker.id=2
Create Kafka as a service on all Kafka nodes
sudo vi /etc/systemd/system/kafka.service
[Unit]
Description=Kafka
After=network.target
[Service]
User=ubuntu
WorkingDirectory=/opt/kafka
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
Restart=on-abort
[Install]
WantedBy=multi-user.target
Reload the systemd daemon
sudo systemctl daemon-reload
Start and enable the Kafka service
sudo systemctl start kafka.service
sudo systemctl enable kafka.service
sudo systemctl status kafka.service
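As an optional sanity check, you can confirm that all three brokers registered themselves in ZooKeeper, using the zookeeper-shell tool bundled with Kafka; a small sketch, run from /opt/kafka:

bin/zookeeper-shell.sh ha-zoo1:2181 ls /brokers/ids

The last line of output should list the broker ids, e.g. [0, 1, 2].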

If there are no problems, continue and create a topic.

Test creating a topic
root@ha-kafka1:/opt/kafka# bin/kafka-topics.sh --create --bootstrap-server ha-kafka1:9092,ha-kafka2:9092,ha-kafka3:9092 --topic test-multibroker
Created topic test-multibroker.
List the topics
root@ha-kafka1:/opt/kafka# bin/kafka-topics.sh --list --bootstrap-server ha-kafka1:9092,ha-kafka2:9092,ha-kafka3:9092
test-multibroker
root@ha-kafka1:/opt/kafka# ls /data/kafka/log/
cleaner-offset-checkpoint log-start-offset-checkpoint meta.properties recovery-point-offset-checkpoint replication-offset-checkpoint test-multibroker-0 test-multibroker-2
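For a quick end-to-end check that messages really flow through the brokers, the console producer and consumer shipped with Kafka can be used; a short sketch, assuming the default listener port 9092:

bin/kafka-topics.sh --describe --bootstrap-server ha-kafka1:9092 --topic test-multibroker
bin/kafka-console-producer.sh --bootstrap-server ha-kafka1:9092 --topic test-multibroker
bin/kafka-console-consumer.sh --bootstrap-server ha-kafka2:9092 --topic test-multibroker --from-beginning

Type a few messages into the producer, then start the consumer from another node; the same messages should appear. Exit both with Ctrl+C.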

Thanks.


Set Up a Multi-Master Kubernetes Cluster with Kubeadm

kubernetes multi master

Requirements:

  • kubernetes version 1.15.11
  • haproxy
  • docker

Node list:

  • ha-balancer 10.10.10.100
  • ha-master1 10.10.10.10
  • ha-master2 10.10.10.11
  • ha-master3 10.10.10.12
  • ha-worker1 10.10.10.20
  • ha-worker2 10.10.10.21

On all node execute this command

sudo apt update; sudo apt autoremove -y

Install the Docker package on ha-master1, ha-master2, ha-master3, ha-worker1, ha-worker2

sudo apt install -y docker.io=18.09.7-0ubuntu1~18.04.4
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo mkdir -p /etc/systemd/system/docker.service.d

Restart docker

sudo systemctl daemon-reload
sudo systemctl restart docker

Install kubectl, kubelet & kubeadm on all master and worker nodes (ha-master1, ha-master2, ha-master3, ha-worker1, ha-worker2)

sudo apt install -y apt-transport-https; curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt update; sudo apt install -y kubelet=1.15.11-00 kubeadm=1.15.11-00 kubectl=1.15.11-00

hold kubelet, kubeadm and kubectl

sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable kubelet

Disable swap and enable bridge netfilter on the master and worker nodes

sudo swapon -s
sudo swapoff -a
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
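Note that swapoff -a only disables swap until the next reboot. To keep it disabled permanently you would typically also comment out the swap entry in /etc/fstab; a minimal sketch, assuming whitespace-separated fstab fields:

sudo sed -i '/ swap / s/^/#/' /etc/fstab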

Install and configure haproxy on node ha-balancer

sudo apt update; sudo apt install haproxy -y
sudo vim /etc/haproxy/haproxy.cfg
...
frontend kubernetes
    bind 10.10.10.100:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server ha-master1 10.10.10.10:6443 check fall 3 rise 2
    server ha-master2 10.10.10.11:6443 check fall 3 rise 2
    server ha-master3 10.10.10.12:6443 check fall 3 rise 2

frontend api_server_kubernetes
    bind 10.10.10.100:8080
    option tcplog
    mode tcp
    default_backend kube_api_server_kubernetes

backend kube_api_server_kubernetes
    mode tcp
    balance roundrobin
    option tcp-check
    server ha-master1 10.10.10.10:8080 check fall 3 rise 2
    server ha-master2 10.10.10.11:8080 check fall 3 rise 2
    server ha-master3 10.10.10.12:8080 check fall 3 rise 2
...

Verify the HAProxy configuration

haproxy -c -V -f /etc/haproxy/haproxy.cfg

restart haproxy service

sudo systemctl restart haproxy
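It is also worth checking locally on ha-balancer that both frontends are listening; a quick sketch:

sudo ss -tlnp | grep -E ':6443|:8080'

Both 10.10.10.100:6443 and 10.10.10.100:8080 should show up in LISTEN state.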

Generate an SSH key on ha-master1 and then copy it to the other master nodes

sudo -i
ssh-keygen
cat /root/.ssh/id_rsa.pub

copy the ssh key to the other masters

ssh-copy-id -i /root/.ssh/id_rsa.pub root@ha-master2
ssh-copy-id -i /root/.ssh/id_rsa.pub root@ha-master3

test ssh to ha-master2 and ha-master3

ssh 10.10.10.11
ssh 10.10.10.12

verify the connection to ha-balancer

nc -v 10.10.10.100 6443
nc -v 10.10.10.100 8080

initialization on ha-master1

vi config.yaml
...
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.11
controlPlaneEndpoint: "10.10.10.100:6443"
...
kubeadm init --config=config.yaml --upload-certs

The result is as below:

You can now join any number of control-plane nodes by running the following command on each as root:

kubeadm join 10.10.10.100:6443 --token 71qkw2.ardnuukvwlvhugbt \
    --discovery-token-ca-cert-hash sha256:a8fad41061a6fb20207ebc3fabb5da65cf5dc397ef97c39ce6dc8f62863e5242 \
    --control-plane --certificate-key 6cd223990b20aefad2c394f3217ef9cc10c8625d33f3a8b91bf7da8cad5db74a

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; if necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterwards.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.10.100:6443 --token 71qkw2.ardnuukvwlvhugbt \
    --discovery-token-ca-cert-hash sha256:a8fad41061a6fb20207ebc3fabb5da65cf5dc397ef97c39ce6dc8f62863e5242

Configure kubectl on ha-master1:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy the CNI network; in this tutorial we will use Weave Net

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
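After a minute or so the Weave pods should come up on every node; a quick check, assuming the name=weave-net label used by the Weave manifest:

kubectl get pods -n kube-system -l name=weave-net -o wide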

On ha-master2 and ha-master3, join the cluster as control-plane nodes

kubeadm join 10.10.10.100:6443 --token 71qkw2.ardnuukvwlvhugbt \
    --discovery-token-ca-cert-hash sha256:a8fad41061a6fb20207ebc3fabb5da65cf5dc397ef97c39ce6dc8f62863e5242 \
    --control-plane --certificate-key 6cd223990b20aefad2c394f3217ef9cc10c8625d33f3a8b91bf7da8cad5db74a

On ha-worker1 and ha-worker2, join the cluster as worker nodes

kubeadm join 10.10.10.100:6443 --token 71qkw2.ardnuukvwlvhugbt \
    --discovery-token-ca-cert-hash sha256:a8fad41061a6fb20207ebc3fabb5da65cf5dc397ef97c39ce6dc8f62863e5242

Verify the nodes are Ready

kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
ha-master1   Ready    master   5d3h   v1.15.11
ha-master2   Ready    master   5d3h   v1.15.11
ha-master3   Ready    master   5d3h   v1.15.11
ha-worker1   Ready    <none>   5d3h   v1.15.11
ha-worker2   Ready    <none>   5d3h   v1.15.11

Verify all pods are running

kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-ffhz9             1/1     Running   0          5d3h
coredns-66bff467f8-w2lcw             1/1     Running   0          5d3h
etcd-ha-master1                      1/1     Running   0          5d3h
etcd-ha-master2                      1/1     Running   0          5d3h
etcd-ha-master3                      1/1     Running   0          5d3h
kube-apiserver-ha-master1            1/1     Running   0          5d3h
kube-apiserver-ha-master2            1/1     Running   0          5d3h
kube-apiserver-ha-master3            1/1     Running   0          5d3h
kube-controller-manager-ha-master1   1/1     Running   1          5d3h
kube-controller-manager-ha-master2   1/1     Running   0          5d3h
kube-controller-manager-ha-master3   1/1     Running   1          5d3h
kube-proxy-245hd                     1/1     Running   0          5d3h
kube-proxy-4ckq2                     1/1     Running   0          5d3h
kube-proxy-m62hj                     1/1     Running   0          5d3h
kube-proxy-rpl5t                     1/1     Running   0          5d3h
kube-scheduler-ha-master1            1/1     Running   2          5d3h
kube-scheduler-ha-master2            1/1     Running   0          5d3h
kube-scheduler-ha-master3            1/1     Running   0          5d3h
weave-net-4lkbs                      2/2     Running   2          5d3h
weave-net-526gt                      2/2     Running   2          5d3h
weave-net-bxvkk                      2/2     Running   0          5d3h
weave-net-ts2m2                      2/2     Running   0          5d3h
weave-net-bgsw4                      2/2     Running   0          5d3h

check cluster info

# kubectl cluster-info
Kubernetes master is running at https://10.10.10.100:6443
KubeDNS is running at https://10.10.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
root@ha-master1:~# kubectl get endpoints kube-scheduler -n kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"ha-master2_327fadbe-ec94-43bb-8076-9a1aafa57cd3","leaseDurationSeconds":15,"acquireTime":"2020-10-04T06:25:34Z","renewTime":"2020-10-04T11:06:30Z","leaderTransitions":2}'
  creationTimestamp: "2020-09-29T07:05:42Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-scheduler
    operation: Update
    time: "2020-10-04T11:06:30Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "1380211"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 5c15a8ad-a5fa-4d17-8416-0c97aff0cfe9

Create an example pod

root@ha-master1:~# kubectl run nginx --image=nginx
pod/nginx created
root@ha-master1:~# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          18s
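To verify that the pod is reachable through the cluster network, you could expose it as a NodePort service and curl it from any node; a sketch (check kubectl get svc for the port Kubernetes assigns, the placeholder below is hypothetical):

kubectl expose pod nginx --port=80 --type=NodePort
kubectl get svc nginx
curl http://10.10.10.20:<assigned-nodeport>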

Thanks.


Introduction to the ELK Stack - Getting Logs

img source : https://unsplash.com/

Requirements for this tutorial:

  • node3 -> ubuntu 18.04 -> server
  • node4 -> ubuntu 18.04 -> client
  • node5 -> centos 7 -> client

Let’s Go:

Execution on node3

1. Update

# apt -y update

2. Install OpenJDK

# sudo apt -y install openjdk-8-jdk
# java -version

3. Install Elasticsearch

# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
# apt -y install apt-transport-https
# echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-7.x.list
# apt -y update && apt -y install elasticsearch

4. Configure Elasticsearch

# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.original
# vi /etc/elasticsearch/elasticsearch.yml

Uncomment line 55 so that it reads:

network.host: localhost

5. Activate elasticsearch service

systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
systemctl status elasticsearch

6. Test Elasticsearch

root@node3:~# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address   State    PID/Program name
tcp   0      0      0.0.0.0:80       0.0.0.0:*         LISTEN   20658/nginx: master
tcp   0      0      127.0.0.53:53    0.0.0.0:*         LISTEN   8433/systemd-resolv
tcp   0      0      0.0.0.0:22       0.0.0.0:*         LISTEN   202

Curl elasticsearch

root@node3:~# curl -XGET 'localhost:9200/?pretty'
{
  "name" : "node3",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "PPNElJoQT7mo8LP9hOkdBA",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
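Besides the banner above, the cluster health API gives a quick view of the node state; a small check:

curl -XGET 'localhost:9200/_cluster/health?pretty'

For this single-node setup the status field should come back green or yellow.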

Kibana

1. Install Kibana

apt -y install kibana

2. Configuration & integration of Kibana with Elasticsearch

cp /etc/kibana/kibana.yml /etc/kibana/kibana.yml.original
vi /etc/kibana/kibana.yml

Uncomment line 7 so that it reads:

server.host: "localhost"

3. Activate kibana service

systemctl enable kibana
systemctl start kibana
systemctl status kibana

4. Install & configure Nginx as a reverse proxy

  • install nginx
    apt -y install nginx apache2-utils
  • configure nginx

      # cp /etc/nginx/sites-available/default /etc/nginx/sites-available/default.original
      # vi /etc/nginx/sites-available/default

Edit /etc/nginx/sites-available/default so that it becomes:

server {
    listen 80;

    server_name _;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.kibana;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
  • Create user & password to login dashboard kibana
    # htpasswd -c /etc/nginx/htpasswd.kibana [username]
  • Activate the nginx service (if the restart fails, see the check after this list)
    systemctl enable nginx
    systemctl restart nginx
    systemctl status nginx
    netstat -tupln
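
If Nginx fails to restart or returns errors, its built-in configuration test is the quickest way to spot a typo in the server block above; a small check:

nginx -t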

5. Access kibana dashboard

http://IP_node3

LOGSTASH

Execution on node3

1.  Install Logstash

apt -y install logstash

2. Configure Logstash

Create the input from Filebeat

vi /etc/logstash/conf.d/input-filebeat.conf

input {
  beats {
    port => 5044
  }
}

Create the output to Elasticsearch

vi /etc/logstash/conf.d/output-elasticsearch.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[fields][log_name]}_%{[agent][hostname]}_%{+YYYY.MM}"
  }
}
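Before starting the service, the pipeline syntax can be verified; a sketch, assuming the default paths of the Debian package:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

It should finish with "Configuration OK".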

3. Activate logstash service

systemctl enable logstash
systemctl start logstash
systemctl status logstash
netstat -tupln

FILEBEAT

Execution on node4 & node5

1. Install Filebeat on node4
# apt -y update

# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
# apt -y install apt-transport-https
# echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-7.x.list
# apt -y update && apt -y install filebeat

# systemctl enable filebeat
# systemctl status filebeat

2. Install Filebeat on node5

# yum -y update
# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
# vi /etc/yum.repos.d/elastic.repo

...
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
...

# yum -y install filebeat

# systemctl enable filebeat
# systemctl start filebeat
# systemctl status filebeat

Send logs to Logstash

Execution on node3

Configure Logstash

vi /etc/logstash/conf.d/filter-syslog.conf

...
filter {
  if [fields][log_name] == "syslog" {
    mutate {
      add_tag => [ "syslog" ]
    }
  }
}
...

Restart the Logstash service

systemctl restart logstash
systemctl status logstash

Execution on node4 & node5

# mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.original

Config filebeat on node4

# vi /etc/filebeat/filebeat.yml
...
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    log_name: syslog

output.logstash:
  hosts: ["IP_internal_node3:5044"]
...

Config filebeat on node5

# vi /etc/filebeat/filebeat.yml
...
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    log_name: syslog

output.logstash:
  hosts: ["IP_internal_VM_node3:5044"]
...

Activate the Filebeat service
# systemctl restart filebeat
# systemctl status filebeat
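
Filebeat also ships with built-in checks that can be run on node4 and node5 to validate the configuration and the connection to Logstash; a quick sketch:

# filebeat test config
# filebeat test output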

Execution on node3

root@node3:~# curl http://localhost:9200/_cat/indices?v
health status index                          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_task_manager_1         M7mX7JkxRhqbRSFaZCZZ6w 1   0   2          1            43.5kb     43.5kb
green  open   .apm-agent-configuration       _zg4Oj8OT-mPis3Xmaf5lw 1   0   0          0            283b       283b
yellow open   syslog_node4_2020.01           zBnZ3VmORRyAVS37ozsC9A 1   1   187        0            194.1kb    194.1kb
green  open   .kibana_1                      QOG6VQDFTzK0HXjeJQKRZQ 1   0   7          0            40.4kb     40.4kb
yellow open   syslog_node5.novalocal_2020.01 QXjp-0GVTDSrOFKuFOY8Ig 1   1   856        0            376.1kb    376.1kb

Dashboard Kibana

Search Log

Thanks.


Monitoring Prometheus with Grafana

In front of the MRT station

This time we will discuss monitoring with Prometheus and visualization with Grafana.

What is Prometheus?

Prometheus is an open-source, metrics-based monitoring system. It is easy to use and has a powerful data model and query language for analyzing the applications and infrastructure we run.

Its simple text exposition format makes it easy to expose metrics to Prometheus.

What is a node exporter?

An exporter is software deployed right next to the application whose metrics you want to collect. The exporter receives requests from Prometheus, collects the required data from the application, converts it into the correct format, and then returns it as a response to Prometheus.

What is Grafana?

Grafana is a popular tool for building dashboards for a variety of monitoring and non-monitoring systems, including Graphite, InfluxDB, Elasticsearch, and PostgreSQL. It is one of the tools you can use to build dashboards when using Prometheus.

Now we will install node exporter, Prometheus and Grafana.

Requirements:

1. Monitoring node: node-monitoring (ip: 10.67.67.30, OS: CentOS 7)

2. Container node: node-container (ip: 10.67.67.31, OS: CentOS 7)

Do this on the container node

If you are using a firewall, open port 9100 first

# firewall-cmd --zone=public --permanent --add-port=9100/tcp
# firewall-cmd --reload
# cd /opt
# wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz
# tar xvfz node_exporter-0.18.1.linux-amd64.tar.gz
# cd node_exporter-0.18.1.linux-amd64
# ./node_exporter --help
# ./node_exporter
...
INFO[0000] - netstat source="node_exporter.go:104"
INFO[0000] - nfs source="node_exporter.go:104"
INFO[0000] - nfsd source="node_exporter.go:104"
INFO[0000] - pressure source="node_exporter.go:104"
INFO[0000] - sockstat source="node_exporter.go:104"
INFO[0000] - stat source="node_exporter.go:104"
INFO[0000] - textfile source="node_exporter.go:104"
INFO[0000] - time source="node_exporter.go:104"
INFO[0000] - timex source="node_exporter.go:104"
INFO[0000] - uname source="node_exporter.go:104"
INFO[0000] - vmstat source="node_exporter.go:104"
INFO[0000] - xfs source="node_exporter.go:104"
INFO[0000] - zfs source="node_exporter.go:104"
INFO[0000] Listening on :9100 source="node_exporter.go:170"
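
Before wiring it into Prometheus, you can confirm that the metrics endpoint responds, for example from the monitoring node; a quick check:

# curl -s http://10.67.67.31:9100/metrics | head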

Access http://10.67.67.31:9100/metrics in a browser:

metrics

Create node exporter as a service

# vi /etc/systemd/system/node_exporter.service

[Unit]
Description=Node Exporter

[Service]
User=root
ExecStart=/opt/node_exporter-0.18.1.linux-amd64/node_exporter

[Install]
WantedBy=default.target

Run the node exporter service

# systemctl daemon-reload
# systemctl enable node_exporter.service
# systemctl start node_exporter.service
# systemctl status node_exporter.service
# journalctl -u node_exporter

Install Prometheus

Do this on node-monitoring

If you are using a firewall, open port 9090 first

# firewall-cmd --zone=public --permanent --add-port=9090/tcp
# firewall-cmd --reload
# cd /opt
# wget https://github.com/prometheus/prometheus/releases/download/v2.10.0/prometheus-2.10.0.linux-amd64.tar.gz
# tar xvfz prometheus-2.10.0.linux-amd64.tar.gz
# cd prometheus-2.10.0.linux-amd64
# vi config.yml

global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['10.67.67.30:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['10.67.67.31:9100']

Check the Prometheus configuration

# ./promtool check config config.yml
# ./prometheus --web.listen-address 10.67.67.30:9090 --config.file /opt/prometheus-2.10.0.linux-amd64/config.yml

Create Prometheus as a service

# vi /etc/systemd/system/prometheus_server.service

[Unit]
Description=Prometheus Server

[Service]
User=root
ExecStart=/opt/prometheus-2.10.0.linux-amd64/prometheus --web.listen-address 10.67.67.30:9090 --config.file /opt/prometheus-2.10.0.linux-amd64/config.yml

[Install]
WantedBy=default.target

Run Prometheus

# systemctl daemon-reload
# systemctl enable prometheus_server.service
# systemctl start prometheus_server.service
# systemctl status prometheus_server.service
# journalctl -u prometheus_server
Prometheus targets
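
The same target information is available from the Prometheus HTTP API, which is handy when no browser is at hand; a small sketch, assuming the listen address configured above:

# curl -s http://10.67.67.30:9090/api/v1/targets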

Install Grafana on node-monitoring

If you are using a firewall, open port 3000

# firewall-cmd --zone=public --permanent --add-port=3000/tcp
# firewall-cmd --reload
# cd /opt
# wget https://dl.grafana.com/oss/release/grafana-6.2.5.linux-amd64.tar.gz
# tar -zxvf grafana-6.2.5.linux-amd64.tar.gz
# cd grafana-6.2.5
# ./bin/grafana-server -homepath /opt/grafana-6.2.5 web

Create Grafana as a service

# vi /etc/systemd/system/grafana.service

[Unit]
Description=Grafana

[Service]
User=root
ExecStart=/opt/grafana-6.2.5/bin/grafana-server -homepath /opt/grafana-6.2.5/ web

[Install]
WantedBy=default.target

Run Grafana:

# systemctl daemon-reload
# systemctl enable grafana.service
# systemctl start grafana.service
# systemctl status grafana.service
# journalctl -u grafana

Access http://10.67.67.30:3000 in a web browser

Default Grafana credentials:

username : admin
password : admin

Accessing the Grafana dashboard

Add a Data Source:

Go to Configuration > Data Sources > Add data source

Type > Prometheus

Example:

Viewing uptime on a node
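
The example above shows node uptime. One common way to build such a panel is to derive uptime from two node_exporter metrics, node_time_seconds minus node_boot_time_seconds; a sketch querying the expression directly from the Prometheus HTTP API (the Grafana panel would use the same expression):

# curl -s 'http://10.67.67.30:9090/api/v1/query?query=node_time_seconds-node_boot_time_seconds'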

That's all, thank you.

Introduction to Docker Compose

source: docker
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features.
Compose works in all environments: production, staging, development, testing, as well as CI workflows. You can learn more about each case in Common Use Cases.

Using Compose is basically a three-step process:

  • Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  • Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  • Run docker-compose up and Compose starts and runs your entire app. source : https://docs.docker.com/compose/

Install compose

sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose

Set permission executable

sudo chmod +x /usr/local/bin/docker-compose

Check docker-compose version

sudo docker-compose --version

Compose and WordPress

Create the directory my_wordpress and enter it

mkdir -p /lab/my_wordpress
cd /lab/my_wordpress

create docker-compose.yml file

version: '3.2'
services:
  db:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: [username]
      MYSQL_PASSWORD: [password]
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: [username]
      WORDPRESS_DB_PASSWORD: [password]
volumes:
  dbdata:

run compose

sudo docker-compose up -d

View container

sudo docker container ls
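
To see the state of both services and follow their logs, Compose has its own wrappers around the Docker commands; a quick sketch:

sudo docker-compose ps
sudo docker-compose logs -f wordpress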

Access WordPress from a browser on port 8000 of the Docker host

Accessing WordPress in the web browser

Thanks

Reference:

https://docs.docker.com/compose/
