Step 1: Configure the JDK environment variables

[root@xuegod Java]# tar -xf jdk-8u66-linux-x64.tar.gz -C /usr/src/

[root@xuegod Java]# cd !$

cd /usr/src/

[root@xuegod src]# cd jdk1.8.0_66/


[root@xuegod jdk1.8.0_66]# vim /etc/profile

export JAVA_HOME=/usr/src/jdk1.8.0_66/   # these two lines are newly added

export PATH=$JAVA_HOME/bin:$PATH

"/etc/profile" 80L, 1868C written

[root@xuegod jdk1.8.0_66]# source !$           # make the changes take effect

source /etc/profile

[root@xuegod jdk1.8.0_66]# java -version       # verify that the JDK is configured correctly

java version "1.8.0_66"

Java(TM) SE Runtime Environment (build 1.8.0_66-b17)

Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)


If you see the output above, the JDK is configured successfully.
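As an extra sanity check, the following sketch (using the install path assumed throughout this guide) confirms that the exported JDK bin directory actually lands on PATH after sourcing /etc/profile:

```shell
# Assumed install path from this guide; adjust if your JDK lives elsewhere.
JAVA_HOME=/usr/src/jdk1.8.0_66/
PATH=$JAVA_HOME/bin:$PATH
# POSIX-sh check that the bin directory is now a PATH component
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) jdk_on_path=yes ;;
  *)                    jdk_on_path=no ;;
esac
echo "JDK on PATH: $jdk_on_path"
```

If java -version still fails while this prints yes, the problem is usually a wrong JAVA_HOME rather than the PATH edit itself.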

Set the hostname

[hadoop@xuegod hadoop]$ vim /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=hadoop                        # the hostname goes here

NTPSERVERARGS=iburst

Note: if the hostname is not bound to an IP address, formatting will emit a warning like this:

unable to determine local hostname - falling back to localhost

and the following error may appear:

SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: xuegod

To check, run hostname -f and confirm that it returns the hostname, like this:

[root@hadoop ~]# hostname -f
hadoop

If it does not, edit /etc/sysconfig/network as described above and reboot.
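A small sketch of reading the HOSTNAME value back out of the config file (written to a temp copy here so it runs anywhere; on a real node point awk at /etc/sysconfig/network instead):

```shell
# Temp stand-in for /etc/sysconfig/network, with the contents shown above
conf=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=hadoop\nNTPSERVERARGS=iburst\n' > "$conf"
# Pull out the value after HOSTNAME=
hn=$(awk -F= '$1 == "HOSTNAME" { print $2 }' "$conf")
echo "configured hostname: $hn"
rm -f "$conf"
```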

Bind the hostname to the IP address

[root@xuegod hadoop-2.4.1]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.65   hadoop
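To verify that the binding is in place, the entry can be checked with grep (demonstrated against a temp copy here; run it against /etc/hosts on the real machine — 192.168.1.65 is the IP used throughout this guide):

```shell
# Temp stand-in for /etc/hosts containing the binding added above
hosts=$(mktemp)
echo '192.168.1.65   hadoop' > "$hosts"
# Match the exact IP followed by whitespace and the hostname
if grep -qE '^192\.168\.1\.65[[:space:]]+hadoop' "$hosts"; then
  bound=yes
else
  bound=no
fi
echo "hadoop bound to 192.168.1.65: $bound"
rm -f "$hosts"
```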

Step 2: Configure SSH and add a hadoop user for passwordless login

[root@xuegod ~]# groupadd hadoop

[root@xuegod ~]# useradd -g hadoop hadoop

[root@xuegod ~]# echo 123456 | passwd --stdin hadoop

Changing password for user hadoop.

passwd: all authentication tokens updated successfully.

[root@xuegod ~]# su - hadoop

[hadoop@xuegod ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):    # press Enter

Created directory '/home/hadoop/.ssh'.

Enter passphrase (empty for no passphrase):                            # press Enter

Enter same passphrase again:                                                     # press Enter

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

68:26:8b:0c:0d:f4:2e:e3:a4:f1:5a:02:bc:25:e7:37 hadoop@xuegod

The key's randomart image is:

+--[ RSA 2048]----+

| .               |

|. .              |

|. .             |

|.o.   .         |

|+*.+. + S        |

|==O. =           |

|oo=..E           |

| + . .          |

|.                |

+-----------------+

[hadoop@xuegod ~]$ ssh-copy-id localhost            # copy the public key to the server that should accept passwordless logins

The authenticity of host 'localhost (::1)' can't be established.

RSA key fingerprint is 21:47:23:f9:04:8e:7a:fc:45:8b:1b:d8:9f:74:26:a9.

Are you sure you want to continue connecting (yes/no)? yes    # type yes

Warning: Permanently added 'localhost' (RSA) to the list of known hosts.

hadoop@localhost's password:                 # the password is needed only this first time

Now try logging into the machine, with "ssh 'localhost'", and check in:

 

 .ssh/authorized_keys

 

to make sure we haven't added extra keys that you weren't expecting.

 

[hadoop@xuegod ~]$ ssh localhost        # test the passwordless login

[hadoop@xuegod ~]$ exit

logout

Connection to localhost closed.
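If passwordless login still prompts for a password, the usual culprit is permissions: with StrictModes (the sshd default) the authorized_keys file is ignored when ~/.ssh is group- or world-writable. A sketch of the expected modes, demonstrated on a temp directory:

```shell
# Stand-in for the hadoop user's home directory
d=$(mktemp -d)
mkdir -p "$d/.ssh"
touch "$d/.ssh/authorized_keys"
# The modes sshd expects before it will trust the key file
chmod 700 "$d/.ssh"
chmod 600 "$d/.ssh/authorized_keys"
dirmode=$(stat -c %a "$d/.ssh")          # GNU stat, as on CentOS/RHEL
filemode=$(stat -c %a "$d/.ssh/authorized_keys")
echo "modes: $dirmode $filemode"
rm -rf "$d"
```

On the real node, apply the same chmod commands to /home/hadoop/.ssh and its authorized_keys.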

Step 3: Configure Hadoop

[root@xuegod hadoop]# tar -xf hadoop-2.4.1.tar.gz -C /usr/src/

[root@xuegod hadoop]# cd !$

cd /usr/src/

[root@xuegod src]# cd hadoop-2.4.1/

[root@xuegod hadoop-2.4.1]# chown -R hadoop:hadoop /usr/src/hadoop-2.4.1/

[root@xuegod hadoop-2.4.1]# chmod 775 /usr/src/hadoop-2.4.1/

[root@xuegod hadoop-2.4.1]# ll -d !$

ll -d /usr/src/hadoop-2.4.1/

drwxrwxr-x 9 hadoop hadoop 4096 Jun 21  2014 /usr/src/hadoop-2.4.1/

3.1 Configuring Hadoop 2.4.1 requires editing five configuration files

File 1: hadoop-env.sh

[hadoop@xuegod hadoop]$ vim hadoop-env.sh

export JAVA_HOME=/usr/src/jdk1.8.0_66/

File 2: core-site.xml

<configuration>

<!-- Specify the default file system (URI) Hadoop uses: the address of the HDFS NameNode -->

       <property>

           <name>fs.defaultFS</name>

           <value>hdfs://hadoop:9000</value>

       </property>

<!-- Specify the directory where Hadoop stores its runtime files -->

       <property>

           <name>hadoop.tmp.dir</name>

           <value>/home/hadoop/hadoop-2.4.1/tmp</value>

       </property>

</configuration>

File 3: hdfs-site.xml

<configuration>

<!-- Specify the number of HDFS replicas -->

       <property>

       <name>dfs.replication</name>

       <value>1</value>

       </property>

</configuration>

File 4: mapred-site.xml (mv mapred-site.xml.template mapred-site.xml)

[hadoop@xuegod hadoop]$ cp mapred-site.xml.template mapred-site.xml

<configuration>

<!-- Specify that MapReduce runs on YARN -->

    <property>

       <name>mapreduce.framework.name</name>

       <value>yarn</value>

    </property>

</configuration>

File 5: yarn-site.xml

 

<configuration>

 

<!-- Site specific YARN configuration properties -->

<!-- Specify the address of the YARN master (ResourceManager) -->

<property>

       <name>yarn.resourcemanager.hostname</name>

       <value>hadoop</value>

</property>

<!-- The mechanism reducers use to fetch data -->

<property>

       <name>yarn.nodemanager.aux-services</name>

       <value>mapreduce_shuffle</value>

</property>

</configuration>

3.2 Add Hadoop to the environment variables

   

vim /etc/profile

export JAVA_HOME=/usr/src/jdk1.8.0_66/

export HADOOP_HOME=/usr/src/hadoop-2.4.1/

export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

[hadoop@xuegod hadoop]$ vim /etc/profile

[hadoop@xuegod hadoop]$ source !$

source /etc/profile

[hadoop@xuegod hadoop]$ hadoop version

Hadoop 2.4.1

Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1604318

Compiled by jenkins on 2014-06-21T05:43Z

Compiled with protoc 2.5.0

From source with checksum bb7ac0a3c73dc131f4844b873c74b630

This command was run using /usr/src/hadoop-2.4.1/share/hadoop/common/hadoop-common-2.4.1.jar
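A quick sketch (paths taken from this guide) that checks whether the Hadoop bin and sbin directories really ended up on PATH, which is what lets hadoop and start-dfs.sh resolve from any directory:

```shell
# Assumed install path from this guide
HADOOP_HOME=/usr/src/hadoop-2.4.1/
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
bin_ok=no; sbin_ok=no
# Test each directory as an exact PATH component
case ":$PATH:" in *":$HADOOP_HOME/bin:"*)  bin_ok=yes ;; esac
case ":$PATH:" in *":$HADOOP_HOME/sbin:"*) sbin_ok=yes ;; esac
echo "bin on PATH: $bin_ok, sbin on PATH: $sbin_ok"
```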

 

 

 

3.3 Format the NameNode (this initializes it)

       hdfs namenode -format    (or: hadoop namenode -format)

[hadoop@hadoop ~]$ hdfs namenode -format

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at hadoop/192.168.1.65

************************************************************/

Seeing this message means the format succeeded.

3.4 Start Hadoop

Start HDFS first:

       [hadoop@hadoop ~]$ start-dfs.sh

      

Then start YARN:

       [hadoop@hadoop ~]$ start-yarn.sh

 

3.5 Verify that startup succeeded

       Verify with the jps command:

[hadoop@hadoop ~]$ jps

3089 NodeManager

2995 ResourceManager

3124 Jps

2662 DataNode

2570 NameNode

2843 SecondaryNameNode
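This check can be scripted: given jps output, confirm that the five expected daemons are all present (the sample output is pasted in as a string here; on a real node use jps_out=$(jps) instead):

```shell
# Sample jps output from this guide; on a real node: jps_out=$(jps)
jps_out='3089 NodeManager
2995 ResourceManager
2662 DataNode
2570 NameNode
2843 SecondaryNameNode'
ok=yes
for daemon in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # -w keeps NameNode from matching inside SecondaryNameNode
  echo "$jps_out" | grep -qw "$daemon" || { echo "missing: $daemon"; ok=no; }
done
echo "all daemons running: $ok"
```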

http://192.168.1.65:50070/        (HDFS management UI)

 

http://192.168.1.65:8088/cluster        (MapReduce/YARN management UI)