Abstract:
This article documents the installation of a multi-node Hadoop 2.2.0 cluster, including basic configuration, startup, and a word-count test run.
Environment:
The setup is based on Ubuntu 12.04 64-bit Server running in VMware Player 4.0.3 on Windows. Install the base software in one virtual machine first, then make two copies and adjust their configuration. The three machines are assigned as follows:
Hadoop1 (Master): NameNode / ResourceManager
Hadoop2 (Slave): DataNode / NodeManager
Hadoop3 (Slave): DataNode / NodeManager
Assume the three virtual machines have the following IP addresses; they will be used throughout:
Hadoop1:192.168.128.130
Hadoop2:192.168.128.131
Hadoop3:192.168.128.132
1. Environment preparation:
Download the free VMware Player and install it;
Download the free Ubuntu 12.04 Server edition and install it in VMware;
2. Base installation:
Run the following commands to upgrade packages and install ssh:
(1) sudo apt-get update
(2) sudo apt-get upgrade
(3) sudo apt-get install openssh-server
Install the Oracle JDK
Install it automatically via the webupd8team PPA by running the following commands:
(1) sudo apt-get install python-software-properties
(2) sudo add-apt-repository ppa:webupd8team/java
(3) sudo apt-get update
(4) sudo apt-get install oracle-java6-installer
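To confirm the JDK installed correctly, you can check the version afterwards (it should report an Oracle/Sun Java 1.6 runtime):
$ java -version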
Create the hadoop user
(1) sudo addgroup hadoop
(2) sudo adduser --ingroup hadoop hduser
Edit the /etc/sudoers file and add the line hduser ALL=(ALL) ALL below the line root ALL=(ALL) ALL. Without this line, hduser cannot run sudo.
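For reference, the relevant part of /etc/sudoers then looks roughly like this (it is safest to edit it with sudo visudo, which checks the syntax before saving):
root    ALL=(ALL) ALL
hduser  ALL=(ALL) ALL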
★★★★★★★★★★★★★★★★★★★★★★★★★★★★★★★★
Note: all of the steps below are performed while logged in as hduser.
★★★★★★★★★★★★★★★★★★★★★★★★★★★★★★★★
3. Common installation:
Download Hadoop 2.2.0
(1) $ cd /home/hduser
(2) $ wget <URL of the hadoop-2.2.0.tar.gz download from an Apache mirror>
(3) $ tar zxf hadoop-2.2.0.tar.gz
(4) $ mv hadoop-2.2.0 hadoop
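Optionally, you can also define HADOOP_HOME in hduser's ~/.bashrc; the steps below use absolute paths, but section 4 refers to $HADOOP_HOME, and having bin and sbin on the PATH is convenient. A minimal sketch:
export HADOOP_HOME=/home/hduser/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Run source ~/.bashrc (or log in again) for the change to take effect.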
Configure Hadoop:
(1) Configure /home/hduser/hadoop/etc/hadoop/hadoop-env.sh
Replace export JAVA_HOME=${JAVA_HOME} with the following:
export JAVA_HOME=/usr/lib/jvm/java-6-oracle
(2) Configure /home/hduser/hadoop/etc/hadoop/core-site.xml,
adding the following inside <configuration>:
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hduser/hadoop/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.128.130:8010</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
Note: make sure the following two points are correct, otherwise errors will occur later.
a. Create the temporary directory with mkdir -p /home/hduser/hadoop/tmp;
b. The IP address in the fs.default.name value must be that of the NameNode, i.e. Hadoop1.
Configure /home/hduser/hadoop/etc/hadoop/mapred-site.xml
(1) cp /home/hduser/hadoop/etc/hadoop/mapred-site.xml.template /home/hduser/hadoop/etc/hadoop/mapred-site.xml
(2) Add the following inside <configuration>:
<property>
<name>mapred.job.tracker</name>
<value>192.168.128.130:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map and reduce task.
</description>
</property>
Configure /home/hduser/hadoop/etc/hadoop/hdfs-site.xml
Add the following inside <configuration>:
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
4. Cluster setup
Make two copies of the virtual machine installed and configured above; these become Hadoop2 and Hadoop3.
On each of the three virtual machines, change the contents of /etc/hostname to the corresponding host name, i.e.
hadoop1's hostname is hadoop1, and so on for the others.
After the change, reboot and confirm with the hostname command that it has taken effect.
Check /etc/hosts on each of the three machines and edit it if necessary so that it contains the following entries:
192.168.128.130 hadoop1
192.168.128.131 hadoop2
192.168.128.132 hadoop3
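As a quick optional check that host-name resolution works, ping each node by name from hadoop1 (and likewise from the other machines):
$ ping -c 1 hadoop2
$ ping -c 1 hadoop3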
Establish SSH trust between the three virtual machines so that they can log in to each other without a password.
(1) Run the commands below once on each of the three machines, appending each machine's .ssh/id_rsa.pub to the .ssh/authorized_keys of the other two; after that the three machines can reach each other without entering a password. You can test with ssh hadoop1 (if it still prompts for a password, see the permissions note after the three command blocks below).
Run on each of the three machines in turn; the first block is for hadoop1, the second for hadoop2, and the third for hadoop3:
ssh-keygen -t rsa -P ""
scp .ssh/id_rsa.pub hduser@192.168.128.131:/home/hduser/.ssh/id_rsa_1.pub
scp .ssh/id_rsa.pub hduser@192.168.128.132:/home/hduser/.ssh/id_rsa_1.pub
After the id_rsa.pub files from the other machines have been copied over, run the following:
cat .ssh/id_rsa.pub > .ssh/authorized_keys
cat .ssh/id_rsa_2.pub >> .ssh/authorized_keys
cat .ssh/id_rsa_3.pub >> .ssh/authorized_keys
==========================================
ssh-keygen -t rsa -P ""
scp .ssh/id_rsa.pub hduser@192.168.128.130:/home/hduser/.ssh/id_rsa_2.pub
scp .ssh/id_rsa.pub hduser@192.168.128.132:/home/hduser/.ssh/id_rsa_2.pub
After the id_rsa.pub files from the other machines have been copied over, run the following:
cat .ssh/id_rsa.pub > .ssh/authorized_keys
cat .ssh/id_rsa_1.pub >> .ssh/authorized_keys
cat .ssh/id_rsa_3.pub >> .ssh/authorized_keys
==========================================
ssh-keygen -t rsa -P ""
scp .ssh/id_rsa.pub hduser@192.168.128.130:/home/hduser/.ssh/id_rsa_3.pub
scp .ssh/id_rsa.pub hduser@192.168.128.131:/home/hduser/.ssh/id_rsa_3.pub
After the id_rsa.pub files from the other machines have been copied over, run the following:
cat .ssh/id_rsa.pub > .ssh/authorized_keys
cat .ssh/id_rsa_1.pub >> .ssh/authorized_keys
cat .ssh/id_rsa_2.pub >> .ssh/authorized_keys
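If passwordless login still asks for a password afterwards, it is usually because OpenSSH refuses keys whose files have permissions that are too open. A common fix, run on each machine:
$ chmod 700 /home/hduser/.ssh
$ chmod 600 /home/hduser/.ssh/authorized_keys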
On each machine, edit $HADOOP_HOME/etc/hadoop/slaves, where $HADOOP_HOME is your Hadoop installation directory. The slaves file should contain:
hadoop2
hadoop3
5. Running Hadoop
Note: everything only needs to be run on the master node, hadoop1. The scripts automatically log in to the other two machines and start the corresponding daemons.
The first time you run Hadoop, the Hadoop file system must be formatted, with the following commands:
cd /home/hduser/hadoop/bin
./hdfs namenode -format
If it succeeds, you will find a success message like the following near the end of the log:
common.Storage: Storage directory /home/hduser/hadoop/tmp/hadoop-hduser/dfs/name has been successfully formatted.
Then start Hadoop as follows:
cd /home/hduser/hadoop/sbin/
$./start-dfs.sh
$./start-yarn.sh
After startup, use the jps command on each machine to check that its processes are running, for example:
$jps
1777 ResourceManager
1464 NameNode
1618 SecondaryNameNode
hduser@hadoop2:~$ jps
1264 DataNode
1344 NodeManager
hduser@hadoop3:~$ jps
1289 NodeManager
1209 DataNode
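You can also verify from the master that both DataNodes have registered with the NameNode (standard HDFS admin command; the exact report format may vary slightly):
$ cd /home/hduser/hadoop
$ bin/hdfs dfsadmin -report
The report should show 2 live datanodes.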
6. Viewing the Hadoop resource manager
http://192.168.128.130:8088/
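The HDFS NameNode also has a web UI, which in Hadoop 2.2 listens on port 50070 by default:
http://192.168.128.130:50070/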
7. Testing Hadoop
cd /home/hduser
$wget http://www.gutenberg.org/cache/epub/20417/pg20417.txt
$cd hadoop
$ bin/hdfs dfs -mkdir /tmp
$ bin/hdfs dfs -copyFromLocal /home/hduser/pg20417.txt /tmp
bin/hdfs dfs -ls /tmp
$ bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /tmp/ /tmp-output
If everything is working, the job runs and produces its results; progress can be followed in the screen output.
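The word counts themselves are written to the /tmp-output directory in HDFS and can be inspected with the usual HDFS commands, for example (part-r-00000 is the typical name of the single reducer's output file; the exact name may differ):
$ bin/hdfs dfs -ls /tmp-output
$ bin/hdfs dfs -cat /tmp-output/part-r-00000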
8. Stopping Hadoop
To stop Hadoop, run the following commands (in /home/hduser/hadoop/sbin) in order:
$./stop-yarn.sh
$./stop-dfs.sh
9. Differences between a cluster installation and a single-node installation
The IP address in the fs.default.name value in core-site.xml must be the master node, in this article the Hadoop1 node;
The dfs.replication value in hdfs-site.xml must match the actual number of DataNode nodes, in this article 2;
The IP address in the mapred.job.tracker value in mapred-site.xml must be the master node, in this article the Hadoop1 node;
The slaves file must list the actual slave nodes, in this article hadoop2 and hadoop3;
Each host's /etc/hostname and /etc/hosts must be configured accordingly so that the nodes in the cluster can identify one another;
SSH trust must be established within the cluster.
Some problems did come up during the installation, but most were resolved via Baidu and Google. One error took quite a bit of time, so it is recorded here for reference.
Error symptom: 13/10/28 07:19:03 WARN hdfs.DFSClient: DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/pg20417.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
Where it occurred: while running bin/hdfs dfs -copyFromLocal /home/hduser/pg20417.txt /tmp
Root cause: after repeated checking, it turned out that the IP address in the fs.default.name value had been set to localhost, so the system could not find HDFS. The error was found in the DataNode log, which read:
2013-10-28 07:33:55,963 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: localhost/127.0.0.1:8010
Solution: change the IP address in fs.default.name to 192.168.128.130, i.e. your master node's IP address.
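Note that a configuration change like this only takes effect after the daemons are restarted; assuming the same paths as above:
$ cd /home/hduser/hadoop/sbin
$ ./stop-yarn.sh
$ ./stop-dfs.sh
$ ./start-dfs.sh
$ ./start-yarn.sh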