HUE Installation Manual
This document draws mainly on the Cloudera and Hortonworks installation guides. Both of those are written for their own products, however, and some settings are product-specific, especially in the Cloudera guide. The Hortonworks guide is the better reference, so this manual is based mainly on it, with the Thrift plugin configuration and installation added.
The figure above shows the overall architecture of Hue and the Hadoop components.
1. Installation Environment:
Before installing, it is best to read the two installation guides mentioned above. This manual installs Hue from the Chinasoft International package repository; you can also point the yum repository at the Hortonworks packages. The yum repository described here is the Chinasoft International one.
Master node: master.hadoop
JobTracker node: node4.hadoop
This manual installs Hue on the master node, master.hadoop.
2. Configure Hadoop (change the following configuration files through the Ambari UI)
1. hdfs-site.xml
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
2. core-site.xml
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
3. webhcat-site.xml
<property>
<name>webhcat.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>webhcat.proxyuser.hue.groups</name>
<value>*</value>
</property>
4. oozie-site.xml
<property>
<name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
<value>*</value>
</property>
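The proxyuser entries above are what let Hue act on behalf of the logged-in user. As an illustrative sketch (master.hadoop assumed as the NameNode host, as elsewhere in this guide), a WebHDFS request in which the hue service user impersonates an end user carries both a user.name and a doas parameter:

```shell
# Sketch: the "hue" service user authenticates as itself (user.name) and
# impersonates the end user (doas); the hadoop.proxyuser.hue.* settings
# above must allow this or the NameNode rejects the request.
end_user=alice
url="http://master.hadoop:50070/webhdfs/v1/user/${end_user}?op=LISTSTATUS&user.name=hue&doas=${end_user}"
echo "$url"
# Against the live cluster you could then run: curl -i "$url"
```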
5. Configure the Thrift Plugin
1. If the Hue server and the JobTracker are on the same node, the plugin jar does not need to be copied; it is already under lib.
2. If the Hue server and the JobTracker are on different nodes, copy /usr/lib/hadoop/lib/hue-plugins-2.2.0-SNAPSHOT.jar from master.hadoop to /usr/lib/hadoop/lib on the JobTracker node node4.hadoop:
scp /usr/lib/hadoop/lib/hue-plugins-2.2.0-SNAPSHOT.jar node4.hadoop:/usr/lib/hadoop/lib
If the jar is not present, install it with yum install hue-plugins.
3. Configure the plugin through the management UI by modifying mapred-site.xml:
<!-- Enable Hue plugins -->
<property>
<name>mapreduce.jobtracker.plugins</name><!-- The Hue docs say to set mapred.jobtracker.plugins, but that key has no effect; the source code shows the property actually read is mapreduce.jobtracker.plugins -->
<value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
</property>
<property>
<name>jobtracker.thrift.address</name>
<value>node4.hadoop:9290</value>
</property>
After making the changes above, restart the affected services.
3. Install Hue
yum install hue
4. Configure Hue
Edit /etc/hue/conf.empty/hue.ini with vi and set the following values according to your cluster:
4.1 Configure the Web Server
1. Hue HTTP Address.
# Webserver listens on this address and port
http_host=0.0.0.0
http_port=8000
2. Specify the Secret Key.
Enter a string of 30-60 characters; the value is arbitrary:
secret_key=secretkeysecretkeysecretkeysecretkey
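One way to generate such a value instead of typing it by hand (a convenience sketch; Hue only requires some 30-60 character string):

```shell
# Generate a random 48-character alphanumeric value for secret_key.
# 48 is an arbitrary length inside Hue's 30-60 character range.
secret_key=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 48)
echo "secret_key=${secret_key}"
```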
3. Configure Hue for SSL. (openssl must be installed first; it is installed by default.) Once configured, Hue must be accessed via https://
Install pyOpenSSL in order to configure Hue to serve over HTTPS. To install pyOpenSSL, from the root of your Hue installation path, complete the following instructions:
Generate a key:
$cd /etc/hue/
### Create a key
$openssl genrsa 1024 > host.key
### Create a self-signed certificate
$openssl req -new -x509 -nodes -sha1 -key host.key > host.cert
Point the following properties at the two generated files:
ssl_certificate=/etc/hue/host.cert
ssl_private_key=/etc/hue/host.key
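Since a mismatched key/certificate pair makes the SSL listener fail at startup, it can be worth rehearsing the two openssl commands and verifying the pair before touching /etc/hue. A sketch using a temporary directory (the -subj value is an assumption; the digest is left at openssl's default because some newer systems refuse SHA-1 signing):

```shell
# Rehearse the key/cert generation in a temp dir, then confirm the
# certificate was issued from the key by comparing RSA moduli.
set -e
dir=$(mktemp -d)
openssl genrsa -out "$dir/host.key" 1024
openssl req -new -x509 -nodes -key "$dir/host.key" \
        -subj "/CN=master.hadoop" > "$dir/host.cert"
key_mod=$(openssl rsa  -noout -modulus -in "$dir/host.key")
crt_mod=$(openssl x509 -noout -modulus -in "$dir/host.cert")
[ "$key_mod" = "$crt_mod" ] && echo "key and certificate match"
```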
4.2 Configure Hadoop
1. Configure the HDFS Cluster.
# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://master.hadoop:8020
# Use WebHdfs/HttpFs as the communication mechanism. To fallback to
# using the Thrift plugin (used in Hue 1.x), this must be uncommented
# and explicitly set to the empty value.
webhdfs_url=http://master.hadoop:50070/webhdfs/v1/
## security_enabled=true
# Settings about this HDFS cluster. If you install HDFS in a
# different location, you need to set the following.
# Defaults to $HADOOP_HDFS_HOME or /usr/lib/hadoop-hdfs
hadoop_hdfs_home=/usr/lib/hadoop/lib
# Defaults to $HADOOP_BIN or /usr/bin/hadoop
hadoop_bin=/usr/bin/hadoop
# Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
hadoop_conf_dir=/etc/hadoop/conf
2. Configure the MapReduce Cluster.
# Configuration for MapReduce JobTracker
# ------------------------------------------------------------------------
[[mapred_clusters]]
[[[default]]]
# Enter the host on which you are running the Hadoop JobTracker
jobtracker_host=node4.hadoop
# The port where the JobTracker IPC listens on
jobtracker_port=9001
# Thrift plug-in port for the JobTracker
thrift_port=9290
# Whether to submit jobs to this cluster
submit_to=true
# Job tracker kerberos principal
## jt_kerberos_principal=jt
## security_enabled=true
# Settings about this MR1 cluster. If you install MR1 in a
# different location, you need to set the following.
# Defaults to $HADOOP_MR1_HOME or /usr/lib/hadoop-0.20-mapreduce
hadoop_mapred_home=/usr/lib/hadoop/lib
# Defaults to $HADOOP_BIN or /usr/bin/hadoop
hadoop_bin=/usr/bin/hadoop
# Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
hadoop_conf_dir=/etc/hadoop/conf
4.3 Configure Beeswax
beeswax_server_host=master.hadoop
beeswax_server_port=8002
beeswax_meta_server_host=localhost
beeswax_meta_server_port=8003
4.4 Configure JobDesigner and Oozie
# The URL where the Oozie service runs on. This is required in order for
# users to submit jobs.
oozie_url=http://master.hadoop:11000/oozie  (master.hadoop is the node where Oozie runs)
## security_enabled=true
# Location on HDFS where the workflows/coordinator are deployed when submitted.
## remote_deployement_dir=/user/hue/oozie/deployments
4.5 Configure WebHCat
templeton_url="http://master.hadoop:50111/templeton/v1/"  (master.hadoop is the node where WebHCat runs)
5. Start Hue
Execute the following command on the Hue server:
/etc/init.d/hue start
Once the Hue service is running, open the Hue web UI at:
http://master.hadoop:8000/
Username: hue  Password: hue
6. To replace Hue's built-in SQLite database with MySQL, see:
Cloudera documentation: Hue Installation ---> Administering Hue