This section builds a fully distributed Hadoop cluster on top of the previous post, 《Dockerfile完成Hadoop2.6的伪分布式搭建》 (a pseudo-distributed Hadoop 2.6 setup built from a Dockerfile).

1. Files used to build the cluster

 

[root@centos-docker hadoop-cluster]# ll
total 340648
# Script that automates building the cluster
-rwxr-xr-x. 1 root root      2518 Aug 13 01:20 build-cluster.sh
# Script that downloads files via scp
-rwxr-xr-x. 1 root root       314 Aug 12 19:31 download.sh
# Script that uploads files via scp
-rwxr-xr-x. 1 root root       313 Aug 12 19:02 upload.sh
# Script for starting and stopping the Docker cluster after it has been built
-rwxr-xr-x. 1 root root       203 Aug 13 00:57 cluster.sh
# Dockerfile used to build the image
-rw-r--r--. 1 root root      2810 Aug 13 00:30 Dockerfile
# Hadoop distribution
-rwxr-x---. 1 root root 195257604 Aug 12 23:18 hadoop-2.6.0.tar.gz
# Java JDK
-rwxr-x---. 1 root root 153512879 Aug 12 23:18 jdk-7u79-linux-x64.tar.gz
# Hadoop configuration files
-rw-r--r--. 1 root root       387 Aug 12 20:57 yarn-site.xml
-rw-r--r--. 1 root root       643 Aug 11 23:47 hdfs-site.xml
-rw-r--r--. 1 root root       400 Aug 11 23:47 core-site.xml
-rw-r--r--. 1 root root       138 Aug 11 23:27 mapred-site.xml
-rw-r--r--. 1 root root        21 Aug 13 01:20 slaves

 

2. Contents of the Hadoop configuration files:

The core-site.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/data/hadoop/tmp</value>
        </property>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://master:9000</value>
        </property>
</configuration>

The hdfs-site.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/data/hadoop/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/data/hadoop/dfs/data</value>
        </property>
    <property>
            <name>dfs.permissions</name>
            <value>false</value>
        </property>
</configuration>

The mapred-site.xml file:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

The yarn-site.xml file:

<configuration>
        <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
        </property>
        <property> 
          <name>yarn.log-aggregation-enable</name> 
          <value>true</value> 
        </property>
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>master</value>
    </property>
</configuration>
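
The slaves file is not shown above because it is generated by build-cluster.sh in section 4, which writes one slave hostname per line. For a cluster with three DataNodes it would contain:

[root@centos-docker hadoop-cluster]# cat slaves
slave1
slave2
slave3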

3. The Dockerfile, explained

# build a new hadoop image with basic  centos 
FROM centos
# who is the author  
MAINTAINER amei

####################Configurate JDK################################
# Install ssh (required for the cluster), iproute (provides the usual networking commands), and which (some hadoop commands fail without it);
# also create the directory that will hold the JDK
RUN yum -y install openssh-server openssh-clients  iproute  which  &&  mkdir /usr/local/java

# ADD copies the JDK tarball into the image and automatically extracts it
ADD jdk-7u79-linux-x64.tar.gz /usr/local/java/

###################Configurate SSH#################################
# Generate the necessary host key files; without them /usr/sbin/sshd will not start
RUN ssh-keygen -q -t rsa -b 2048 -f /etc/ssh/ssh_host_rsa_key -N '' &&  ssh-keygen -q -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -N '' &&  ssh-keygen -q -t ed25519 -f /etc/ssh/ssh_host_ed25519_key  -N ''

# Configure passwordless login to localhost: first generate a key pair, then create the authorized_keys file
RUN ssh-keygen -f /root/.ssh/id_rsa -N '' &&  cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys


###################Configurate Hadoop##############################
# ADD copies the hadoop tarball into the image and extracts it automatically
ADD hadoop-2.6.0.tar.gz /usr/local/

# Create a symlink to the hadoop directory
RUN ln -s /usr/local/hadoop-2.6.0 /usr/local/hadoop

# Copy the prepared configuration files into the image, overwriting the originals
COPY core-site.xml /usr/local/hadoop/etc/hadoop/
COPY hdfs-site.xml /usr/local/hadoop/etc/hadoop/
COPY mapred-site.xml /usr/local/hadoop/etc/hadoop/
COPY yarn-site.xml /usr/local/hadoop/etc/hadoop/
COPY slaves /usr/local/hadoop/etc/hadoop/

# Reset JAVA_HOME in hadoop-env.sh, otherwise the cluster will not start properly
# Set StrictHostKeyChecking no in ssh_config so ssh/scp do not ask yes/no on first connection
# Set UseDNS no in sshd_config (the default is yes); disabling DNS lookups speeds up ssh/scp connections
RUN sed -i "s?JAVA_HOME=\${JAVA_HOME}?JAVA_HOME=/usr/local/java/jdk?g" /usr/local/hadoop/etc/hadoop/hadoop-env.sh && \
 sed -i "s?#\s\+StrictHostKeyChecking\s\+ask?StrictHostKeyChecking no?g" /etc/ssh/ssh_config  && \
 sed -i "s?#UseDNS yes?UseDNS no?g" /etc/ssh/sshd_config 

################### Integration configuration #######################
# Set the image's environment variables. Variables set this way only take effect when the container runs /bin/bash directly; they are lost once you log in over ssh (annoying, but that is how it works).
ENV JAVA_HOME /usr/local/java/jdk
ENV JRE_HOME ${JAVA_HOME}/jre
ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib
ENV HADOOP_HOME /usr/local/hadoop
ENV PATH ${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${JAVA_HOME}/bin:$PATH
# Because the variables set above are lost on an ssh login, they are also appended to root's .bash_profile so they are available there as well.
# Another catch: export JRE_HOME=/usr/local/java/jdk/jre cannot be written as export JRE_HOME=${JAVA_HOME}/jre (the same goes for the rest), so every entry below uses the absolute path. (A lesson learned the hard way.)
RUN ln -s /usr/local/java/jdk1.7.0_79 /usr/local/java/jdk && \
 echo "export JAVA_HOME=/usr/local/java/jdk" >> /root/.bash_profile && \
 echo "export JRE_HOME=/usr/local/java/jdk/jre" >> /root/.bash_profile && \
 echo "export CLASSPATH=.:/usr/local/java/jdk/lib:/usr/local/java/jdk/jre/lib" >> /root/.bash_profile && \
 echo "export HADOOP_HOME=/usr/local/hadoop" >> /root/.bash_profile && \
 echo "export PATH=/usr/local/hadoop/bin:/usr/local/hadoop/sbin:/usr/local/java/jdk/bin:$PATH" >> /root/.bash_profile

# Set the root password
RUN echo "root:1234" | chpasswd
# Default command to run when the container is started without an explicit command
CMD ["/usr/sbin/sshd","-D"]

4. Automated cluster build

The whole process is completed by the build-cluster.sh script.

  You only enter the number of DataNodes while the script runs; it then builds the image from the Dockerfile, creates the container cluster from that image, and configures the network.

#!/bin/bash

# Name of the image used for every node
IMAGE=hadoop-cluster
echo "The hostname of namenode will be master"
echo "How many datanode would you want? Please enter:"
# Read the number of datanodes
read num
echo "the hostname of datanode will be:"
if [ -f "slaves" ]; then
  rm -f slaves
fi

# The namenode's hostname is fixed as master; this block and the loop below build hadoop's slaves file and the containers' hosts file.
# (Both files are created locally here and uploaded into the containers later.)
if [ -f "hosts" ]; then
  rm -f hosts
  echo "127.0.0.1    localhost" >> hosts
  echo "192.168.1.10 master" >> hosts
fi

# Build hadoop's slaves file, which lists the datanodes.
# Each datanode container gets the hostname slaveN; the hosts live on the 192.168.1.0/24 network.
for count in $(seq $num)
do
  echo "slave$count"
  echo "slave$count" >> slaves
  echo "192.168.1.1$count slave$count" >> hosts
done

# Because the image will be rebuilt, stop and remove any existing containers based on the hadoop-cluster image
echo "stop and remove the relevant containers.."
names=(`docker ps -a | grep $IMAGE | awk '{print $1}'`)
for name in ${names[*]}
do
  echo $name
  docker stop $name
  docker rm $name
done

# Remove the old hadoop-cluster image, if it exists
cluster=`docker images | grep $IMAGE`
if [ -z "$cluster" ]; then
  echo "the $IMAGE image is not existed!"
else
  echo "removing the $IMAGE..."
  docker rmi $IMAGE
fi

# Build the new hadoop-cluster image from the Dockerfile above
echo "build the $IMAGE image..."
docker build -t "$IMAGE" .

# The containers may share files with the host, so create the shared directories first (a separate directory per container hostname)
echo "creating the namenode master..."
if [ ! -d "/data/share" ]; then 
  mkdir -p /data/share
fi
if [ ! -d "/data/share/master" ]; then
  mkdir /data/share/master
fi

# The setup below uses the br0 virtual bridge; if it already exists, delete it first
ip link set br0 down
ip link delete br0
# Remove the host's ~/.ssh/known_hosts file, otherwise ssh may complain: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
rm -f ~/.ssh/known_hosts
# Create the namenode container with hostname master
docker run -itd -p 50070:50070 -p 8088:8088 --name=master --hostname=master --privileged=true --net=none -v /data/share/master:/opt/share  $IMAGE
# Give the container a static IP via pipework (it does not survive a container restart)
pipework br0 master 192.168.1.10/24@192.168.1.1

# Create and configure the datanode containers
echo "creating the datanodes.."
for count1 in $(seq $num) 
do
  nodename=slave$count1
  addr=192.168.1.1$count1
# Create the shared directory
  if [ ! -d "/data/share/$nodename" ]; then
    mkdir /data/share/$nodename
  fi
  docker run -itd --name=$nodename --hostname=$nodename --privileged=true --net=none -v /data/share/$nodename:/opt/share  $IMAGE
# Assign the static IP
  pipework br0 $nodename $addr/24@192.168.1.1
done

# Add an IP to the br0 bridge; this also installs a route to the 192.168.1.0/24 subnet.
ip addr add 192.168.1.1/24 broadcast +1 dev br0

# Remove any existing authorized_keys file in this directory first
if [ -f "authorized_keys" ]; then
  rm -f authorized_keys
fi
# Download each container's id_rsa.pub into this directory and append it to authorized_keys,
# then push authorized_keys into every container's ~/.ssh/ so the containers can ssh to one another without passwords.
# download.sh and upload.sh use expect to supply the password (set inside those scripts), so no interactive input is needed.

for ((i=0; i<=$num; i++))
do
  addr=192.168.1.1$i
  ./download.sh $addr ~/.ssh/id_rsa.pub ./rsa_tmp
  cat rsa_tmp >> authorized_keys
  rm -f rsa_tmp
done
# Copy the hosts and authorized_keys files into place in every container
for ((i=0; i<=$num; i++))
do
  addr=192.168.1.1$i
  ./upload.sh $addr authorized_keys  ~/.ssh/
  ./upload.sh $addr hosts  /etc/hosts
done
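
The cluster.sh script listed in section 1 is not reproduced in the original post. Since the pipework addresses do not survive a container restart, it presumably just starts or stops the containers and re-attaches the static IPs. A minimal sketch along those lines (the hostnames and the 192.168.1.0/24 addressing follow build-cluster.sh; the number of DataNodes is hard-coded here purely as an assumption):

#!/bin/bash
# cluster.sh (sketch): start or stop the already-built containers and restore their IPs
NUM=3   # assumed number of datanodes
case "$1" in
  start)
    docker start master
    pipework br0 master 192.168.1.10/24@192.168.1.1
    for i in $(seq $NUM); do
      docker start slave$i
      pipework br0 slave$i 192.168.1.1$i/24@192.168.1.1
    done
    ip addr add 192.168.1.1/24 broadcast +1 dev br0
    ;;
  stop)
    for i in $(seq $NUM); do docker stop slave$i; done
    docker stop master
    ip link set br0 down
    ip link delete br0
    ;;
esac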

5. Supplying the scp password automatically instead of typing it interactively

  Downloading a file with scp (download.sh):

#!/usr/bin/expect -f

# First argument: the IP address
set addr [lrange $argv 0 0]
# Second argument: the full path of the file to download
set source_name [lrange $argv 1 1]
# Third argument: where to save the file locally (or a new name for it)
set dist_name [lrange $argv 2 2]
set password 1234

spawn scp root@$addr:$source_name ./$dist_name
set timeout 30
expect {
  "yes/no" { send "yes\r"; exp_continue}
  "password:" { send "$password\r" }
}
send "exit\r"
expect eof

  Uploading a file with scp (upload.sh):

#!/usr/bin/expect -f

set addr [lrange $argv 0 0]
set source_name [lrange $argv 1 1]
set dist_name [lrange $argv 2 2]
set password 1234

spawn scp $source_name  root@$addr:$dist_name
set timeout 30
expect {
  "yes/no" { send "yes\r"; exp_continue}
  "password:" { send "$password\r" }
}
send "exit\r"
expect eof
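
Both helpers are driven by build-cluster.sh, but they can also be run by hand; the arguments below use the addressing from the build script (192.168.1.10 is master, 192.168.1.11 is slave1):

# Fetch slave1's public key into the current directory
./download.sh 192.168.1.11 ~/.ssh/id_rsa.pub ./rsa_tmp
# Push the merged authorized_keys and the hosts file to master
./upload.sh 192.168.1.10 authorized_keys ~/.ssh/
./upload.sh 192.168.1.10 hosts /etc/hosts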

6. Testing:

  Run the ./build-cluster.sh command, then enter the desired number of DataNodes.

    

  The containers that were created (screenshot in the original post).

  Log in to master and format the NameNode:

[root@centos-docker ~]# ssh root@master
Warning: Permanently added 'master' (ECDSA) to the list of known hosts.
root@master's password:
[root@master ~]# hdfs namenode -format
16/08/12 20:57:07 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.1.10
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
......

  Start Hadoop:

[root@master ~]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: Warning: Permanently added 'master,192.168.1.10' (ECDSA) to the list of known hosts.
master: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-master.out
slave2: Warning: Permanently added 'slave2,192.168.1.12' (ECDSA) to the list of known hosts.
slave3: Warning: Permanently added 'slave3,192.168.1.13' (ECDSA) to the list of known hosts.
slave1: Warning: Permanently added 'slave1,192.168.1.11' (ECDSA) to the list of known hosts.
slave3: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-slave3.out
slave2: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-slave2.out
slave1: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-slave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-resourcemanager-master.out
slave2: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-slave2.out
slave1: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-slave1.out
slave3: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-slave3.out
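
To confirm that every DataNode actually registered, the stock Hadoop tools can be checked on master (jps and hdfs dfsadmin -report are standard commands, not part of the original write-up); the NameNode web UI is also reachable from the host on port 50070 because the master container was started with -p 50070:50070:

# Java daemons running on master should include NameNode, SecondaryNameNode and ResourceManager
[root@master ~]# jps
# HDFS summary; with three datanodes it should report three live nodes
[root@master ~]# hdfs dfsadmin -report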

 

7. Miscellaneous

All of the files for building the hadoop cluster have been uploaded to Baidu Cloud. After extracting them, run ./build-cluster.sh inside the directory to build the cluster.

Note: docker must already be installed on the system, an image named centos must be available, and pipework, expect, iproute, and bridge-utils must also be installed.

Also remove the comment in front of the docker build -t "hadoop-cluster" . line in build-cluster.sh.