
KAFKA CLUSTER SETUP GUIDE



Step-1: Set up passwordless SSH communication between the master and slave machines.

1- check communication between both machines. [ping master.node.com  /  ping slave.node.com]
2- set the fully qualified domain name. [/etc/hosts]
3- su root [master/slave machine]
4- change the hostname in the /etc/hostname file, then verify:

hostname -f
master.node.com

5- set up passwordless SSH between master and slave (a sketch follows below).

See the previous blog post for the detailed SSH setup:
http://hadoop-edu.blogspot.com/2018/09/installation-of-apache-hadoop.html
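
A minimal sketch of the name resolution and key exchange; the IP addresses below are placeholders for your own machines, and "user" stands for your login account:

# /etc/hosts on both machines (IP addresses are assumptions)
192.168.1.10   master.node.com
192.168.1.11   slave.node.com

# on the master: generate a key pair and copy it to the slave (repeat the other way round)
ssh-keygen -t rsa -P ""
ssh-copy-id user@slave.node.com

# verify: this should print the slave's FQDN without asking for a password
ssh user@slave.node.com hostname -f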


Step-2: Extract Kafka and ZooKeeper and update the ~/.bashrc file.

1-   /usr/lib/kafka/...   [extract the Kafka tar here]       [on both machines]
2-   /usr/lib/zoo/...     [extract the ZooKeeper tar here]   [on both machines]
      tar -xzvf  zookeeper-3.4.10.tar.gz   [master & slave]
      tar -xzvf  kafka_2.11-1.1.0.tgz      [master & slave]

nano ~/.bashrc

export ZOOKEEPER_HOME=/usr/lib/zoo/zookeeper-3.4.10
export KAFKA_HOME=/usr/lib/kafka/kafka_2.11-1.1.0
export PATH=$PATH:$KAFKA_HOME/bin
export PATH=$PATH:$ZOOKEEPER_HOME/bin

source ~/.bashrc
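
A quick check that the variables took effect (expected results shown as comments):

echo $KAFKA_HOME               # /usr/lib/kafka/kafka_2.11-1.1.0
echo $ZOOKEEPER_HOME           # /usr/lib/zoo/zookeeper-3.4.10
which zkServer.sh              # should resolve inside $ZOOKEEPER_HOME/bin
which kafka-server-start.sh    # should resolve inside $KAFKA_HOME/bin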


Step-3: Create the zoo.cfg file from zoo_sample.cfg and add the master and slave servers.

1- echo $KAFKA_HOME
2- echo $ZOOKEEPER_HOME
3- cp $ZOOKEEPER_HOME/conf/zoo_sample.cfg $ZOOKEEPER_HOME/conf/zoo.cfg   [on both master and slave machines]
4- nano $ZOOKEEPER_HOME/conf/zoo.cfg   <add your servers>
5- create the /var/zookeeper directory.   [mkdir -p /var/zookeeper]
dataDir=/var/zookeeper       [should not be left at /tmp]

server.1=master.node.com:2888:3888
server.2=slave.node.com:2889:3889
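
Putting the pieces together, zoo.cfg on both machines might look like the sketch below; tickTime, initLimit, syncLimit, and clientPort are the defaults carried over from zoo_sample.cfg:

# $ZOOKEEPER_HOME/conf/zoo.cfg  (same file on master and slave)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/zookeeper
clientPort=2181
server.1=master.node.com:2888:3888
server.2=slave.node.com:2889:3889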

Step-4: Create a myid file inside the /var/zookeeper directory.

< delete any existing files inside /var/zookeeper first >
cd /var/zookeeper
touch myid            !---create the myid file
echo '1' > myid       <<< master node
echo '2' > myid       <<< slave node
chmod 777 myid        <<< master and slave nodes
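
A one-line sanity check on each node:

cat /var/zookeeper/myid       # prints 1 on the master, 2 on the slave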


Step-5: Update the Kafka server properties.

Edit kafka/config/server.properties on each machine:
master                    slave
broker.id=1               broker.id=2

zookeeper.connect=master.node.com:2181,slave.node.com:2181   [master/slave]
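
On the master, the relevant server.properties lines might look like this sketch; listeners and log.dirs are assumed values, so adjust them to your environment (on the slave, use broker.id=2 and its own host name):

# $KAFKA_HOME/config/server.properties on the master (listeners/log.dirs are assumptions)
broker.id=1
listeners=PLAINTEXT://master.node.com:9092
log.dirs=/var/kafka-logs
zookeeper.connect=master.node.com:2181,slave.node.com:2181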


Step-6: Start ZooKeeper and check the ZooKeeper status.
Start ZooKeeper first, then start the Apache Kafka server.

zkServer.sh start
zkServer.sh status

apt-get install clustershell     [optional: run commands on all nodes at once]

clush  -g  zk  -b  "/usr/lib/zoo/zookeeper-3.4.10/bin/zkServer.sh status" 
< view leaders and followers >
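
clush -g zk assumes a node group named zk has been defined; a minimal sketch of clustershell's groups file (the path can vary by version, e.g. /etc/clustershell/groups):

# /etc/clustershell/groups  (group name and members are assumptions)
zk: master.node.com,slave.node.com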

Start ZooKeeper on all nodes in the cluster:
zkServer.sh start
zkServer.sh stop
zkServer.sh status
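
On a healthy two-node ensemble, zkServer.sh status reports one leader and one follower; the output looks roughly like this (paths will match your install):

ZooKeeper JMX enabled by default
Using config: /usr/lib/zoo/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader          <<< the other node reports Mode: follower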

Step-7: Verify the broker.id in the Kafka server properties.

....update broker.id=1 on the master node   <kafka/config/server.properties>
....update broker.id=2 on the slave node    <kafka/config/server.properties>
zookeeper.connect=master.node.com:2181,slave.node.com:2181   <master and slave machines>

Step-8: Start the Kafka server.

kafka-server-start.sh  $KAFKA_HOME/config/server.properties
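
To keep the broker running after you close the terminal, the stock start script also accepts a -daemon flag; broker logs then typically land in $KAFKA_HOME/logs/server.log:

kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties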

MASTER SLAVE COMMUNICATION:

Produce Messages (the topic is auto-created on first use by default):
new terminal:  < master node >
root@master:/home/masternode#
echo "hello" | kafka-console-producer.sh --broker-list master.node.com:9092 --topic mytopic

or
kafka-console-producer.sh  --broker-list  master.node.com:9092  --topic mytopic
> hello kafka student welcome in techstack.
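
If auto topic creation is disabled, create the topic explicitly first; in Kafka 1.1.0 the topic tooling still talks to ZooKeeper (replication-factor 2 assumes both brokers are up):

kafka-topics.sh --create --zookeeper master.node.com:2181 --replication-factor 2 --partitions 2 --topic mytopic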

List Topics:
new terminal:  < slave node >
kafka-topics.sh --list --zookeeper master.node.com:2181
mytopic

Get messages from the consumer:
kafka-console-consumer.sh --bootstrap-server master.node.com:9092 --topic mytopic  --from-beginning
>hello kafka student welcome in techstack.
