Cloudera Manager
https://archive.cloudera.com/cm6/6.0.0/
CDH
https://archive.cloudera.com/cdh6/6.0.0/
The default yum cache path on el7 is
/var/cache/yum/x86_64/7/cloudera-manager/packages
一、Host environment setup
0、Adjust the data disk mount directory
df -h
lsblk
lscpu
free -g
uname -a
lsb_release -a
mkdir /app
vi /etc/fstab
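After creating /app, add a mount entry for the data disk to /etc/fstab. A minimal sketch, assuming the disk is /dev/vdb1 formatted as ext4 (device name and filesystem are assumptions; use what lsblk actually reports):
# assumed /etc/fstab entry mounting the data disk at /app
/dev/vdb1    /app    ext4    defaults    0 0
# mount it without rebooting and confirm
mount -a
df -h /app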
1、Disable the firewall
service iptables status
chkconfig iptables off
(In the test environment the firewall has already been disabled by ops.)
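The iptables service commands above are CentOS 6 style. On the el7 hosts used later in this guide, the firewall is firewalld, so the equivalent steps (skip if ops already handled it) are:
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld    # verify it is inactive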
2、Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
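The config-file change only takes effect after a reboot. To turn enforcement off immediately and make the edit non-interactively (a sketch, assuming the file still reads SELINUX=enforcing):
setenforce 0                                            # permissive until the next reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
getenforce                                              # verify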
3、Change the hostname
vi /etc/sysconfig/network
Set HOSTNAME= on each host respectively (cloudera1, cloudera2, cloudera3):
4、Edit the hosts mapping
vi /etc/hosts
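A sketch of the mapping, assuming the three nodes cloudera1-3 (the IP addresses below are placeholders; substitute the real ones):
# /etc/hosts - placeholder IPs
192.168.0.101  cloudera1
192.168.0.102  cloudera2
192.168.0.103  cloudera3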
Reboot the system at this point and the changes take effect.
5、Set up passwordless SSH
------------- passwordless SSH for root
Generate a key pair on cloudera1
ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
Copy the generated public key to cloudera1, cloudera2, and cloudera3
cd /root/.ssh
vi /root/.ssh/authorized_keys
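Instead of pasting the key into authorized_keys by hand, ssh-copy-id can distribute it (hostnames follow the node names above; assumes password login is still enabled):
ssh-copy-id -i /root/.ssh/id_rsa.pub root@cloudera2
ssh-copy-id -i /root/.ssh/id_rsa.pub root@cloudera3
ssh root@cloudera2 hostname    # should log in without a password prompt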
------------- passwordless SSH for appuser
su appuser
ssh-keygen -t rsa -P '' -f /home/appuser/.ssh/id_rsa
cat /home/appuser/.ssh/id_rsa.pub >> /home/appuser/.ssh/authorized_keys
Copy the generated public key to cloudera1, cloudera2, and cloudera3
su appuser
cd /home/appuser/
chmod 700 ~/.ssh
cd .ssh
touch authorized_keys
chmod 640 authorized_keys
vi /home/appuser/.ssh/authorized_keys
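A quick check that the appuser keys work across nodes (hostnames assumed per the earlier naming):
su appuser
ssh appuser@cloudera2 hostname    # should print cloudera2 with no password prompt
ssh appuser@cloudera3 hostname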
mkdir /app/opt
mkdir /app/opt/appuser
chown -R appuser:appuser /app/opt/appuser
二、CDH deployment
Accounts: root/b1gd@te2017, cdhuser/cdhuser123!@#
1、Time synchronization (skip; already configured by ops)
mv /etc/yum.repos.d/ /etc/yum.repos
yum install ntp
chkconfig ntpd on
chkconfig --list ntpd
2、Deploy CDH 6
Unpack CM and the MySQL driver (skip)
Install the JDK
Check for an existing JDK and remove it
rpm -qa | grep jdk
yum -y remove java-1.6.0-openjdk-devel-1.6.0.0-1.66.1.13.0.el6.x86_64
Remove all JDK packages that were installed via rpm
rpm -qa | grep gcj
yum -y remove java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64
--------------------------------------------------OSSCmMysql01--------------------------
1、jdk
rpm -ivh oracle-j2sdk1.8-1.8.0+update141-1.x86_64.rpm
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_141-cloudera
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=./:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
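Reload the profile and confirm the Cloudera-packaged JDK is picked up:
source /etc/profile
java -version    # should report 1.8.0_141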
2、daemons
rpm -ivh cloudera-manager-daemons-6.0.0-530873.el7.x86_64.rpm
3、server
rpm -ivh cloudera-manager-server-6.0.0-530873.el7.x86_64.rpm
(jump ahead and run step 6 first)
rpm -ivh cloudera-manager-server-db-2-6.0.0-530873.el7.x86_64.rpm
4、agent
Copy the resources
cp /app/soft/cloudera-manager-agent-6.0.0-530873.el7.x86_64.rpm /var/cache/yum/x86_64/7/cloudera-manager/packages/
cp /app/soft/cloudera-manager.repo /etc/yum.repos.d/
Check the repos
yum repolist all
yum clean all
Install
yum -y install cloudera-manager-agent
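For reference, the cloudera-manager.repo copied above is a plain yum repo file; a minimal sketch (the baseurl is an assumption; point it at your local httpd mirror or the Cloudera archive):
[cloudera-manager]
name=Cloudera Manager 6.0.0
# assumed package location; replace with your local mirror for an offline install
baseurl=https://archive.cloudera.com/cm6/6.0.0/redhat7/yum/
gpgcheck=0
enabled=1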
5、Copy the MySQL JDBC driver
Note: grant permissions on the jar after copying it to the target paths
chmod 777 mysql-connector-java-5.1.43-bin.jar
mkdir -p /usr/share/java/
cp /app/soft/mysql-connector-java-5.1.43-bin.jar /opt/cloudera/cm/lib
cp /app/soft/mysql-connector-java-5.1.43-bin.jar /usr/share/java/mysql-connector-java.jar
6、Install Python on CentOS 7
yum install centos-release-scl
yum install scl-utils
yum install python27
yum -y install chkconfig python bind-utils psmisc libxslt zlib sqlite cyrus-sasl-plain cyrus-sasl-gssapi fuse fuse-libs redhat-lsb postgresql* portmap mod_ssl openssl-devel python-psycopg2 MySQL-python
7、httpd
yum install httpd (already deployed)
8、Initialize the CDH metadata database
/opt/cloudera/cm/schema/scm_prepare_database.sh mysql bd_cmmetadata_db -h localhost -uroot -pdl2PA64i --scm-host localhost scm scm scm
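If the script succeeds it writes the connection settings to /etc/cloudera-scm-server/db.properties; checking that file is a quick way to confirm the step:
cat /etc/cloudera-scm-server/db.properties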
9、Set server_host to the master node's hostname
vi /etc/cloudera-scm-agent/config.ini
Add
server_host=alicm.cloud.9ffox.com
Distribute the same config.ini to every agent node (see the sketch below)
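A sketch of pushing the file out, assuming the agent hosts are cloudera2 and cloudera3 (substitute your real node names):
scp /etc/cloudera-scm-agent/config.ini root@cloudera2:/etc/cloudera-scm-agent/
scp /etc/cloudera-scm-agent/config.ini root@cloudera3:/etc/cloudera-scm-agent/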
10、Add the agents
Run:
cd /app/soft/
rpm -ivh oracle-j2sdk1.8-1.8.0+update141-1.x86_64.rpm
source /etc/profile
java -version
cd /app/soft/
rpm -ivh cloudera-manager-daemons-6.0.0-530873.el7.x86_64.rpm
yum -y install cloudera-manager-agent
cp /app/soft/cloudera-manager.repo /etc/yum.repos.d/
mkdir -p /usr/share/java/
cp /app/soft/mysql-connector-java-5.1.43-bin.jar /usr/share/java/
yum -y install cloudera-manager-agent
With online access to the yum repo you can stop after the steps above; for an offline install, also run the following:
cp /app/soft/cloudera-manager-agent-6.0.0-530873.el7.x86_64.rpm /var/cache/yum/x86_64/7/cloudera-manager/packages/
11、Create the service user on each node (not needed when installing via yum)
On the master node:
useradd --system --home=/var/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
-- Copy the parcel files (rename the .sha1 file to .sha)
cd /app/opt/cloudera
sha1sum CDH-6.0.0-1.cdh6.0.0.p0.537114-el7.parcel | awk '{ print $1 }' > CDH-6.0.0-1.cdh6.0.0.p0.537114-el7.parcel.sha
cp CDH-6.0.0-1.cdh6.0.0.p0.537114-el7.parcel.sha /opt/cloudera/parcel-repo
cp CDH-6.0.0-1.cdh6.0.0.p0.537114-el7.parcel /opt/cloudera/parcel-repo
cp manifest.json /opt/cloudera/parcel-repo
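Cloudera Manager reads the parcel repository as the cloudera-scm user, so it is usually worth making sure that user owns the copied files (a common extra step, not in the original notes):
chown -R cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo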
12、Details to take care of before starting
echo "JAVA_HOME=/usr/java/jdk1.8.0_141-cloudera" >> /etc/environment
source /etc/environment
cat /proc/sys/vm/swappiness
echo 0 > /proc/sys/vm/swappiness
cat /proc/sys/vm/swappiness    # verify the value is now 0
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
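The swappiness and transparent hugepage changes above only last until the next reboot; one way to persist them (assuming a standard el7 layout; not in the original notes):
echo 'vm.swappiness = 0' >> /etc/sysctl.conf
cat >> /etc/rc.d/rc.local <<'EOF'
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
EOF
chmod +x /etc/rc.d/rc.local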
13、Start the server and the agents (start as root)
Start on the master node
service cloudera-scm-server restart
Start on each node
service cloudera-scm-agent restart
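If startup fails, the logs are the first place to look (standard CM log locations; adjust if your install relocated them):
tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log    # on the master
tail -f /var/log/cloudera-scm-agent/cloudera-scm-agent.log      # on each agent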
Log in at http://172.31.3.175:7180/ and test with admin/admin
http://172.31.3.175:7180/cmf/home (mapped access: http://jvcm.cloud.9ffox.com/cmf/login?logout)
172.31.3.175:7180
admin
admin
Following the wizard prompts, the local binary install package (parcel repository) path is
/app/cloudera/parcel-repo
and the local install directory to add is
/app/cloudera/parcels
After changing it, the password is admin/ivTm01ly
6、Data and log directories
Data directory
/app/opt/cloudera/cloudera-data/zookeeper
Log directory
/app/opt/cloudera/cloudera-log/zookeeper
7、Load Kafka as a parcel (example of adding extra parcels)
a、Stop the cluster
b、Copy the files
mv KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel.sha1 KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel.sha
cp KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel.sha /mysql/opt/cloudera/cloudera/parcel-repo/
cp KAFKA-2.2.0-1.2.2.0.p0.68-el6.parcel /mysql/opt/cloudera/cloudera/parcel-repo/
Copy the Kafka CSD jar
KAFKA-1.2.0.jar
to /mysql/opt/cloudera/cloudera/csd
Copy the Kafka manifest.json entries into the original CDH manifest.json
c、Start the cluster and activate the Kafka parcel; be sure to distribute and activate only after CM has started
8、When installing Hive, copy the MySQL driver into the auxlib and lib directories under the Hive root
chmod 777 mysql-connector-java-5.1.43-bin.jar
cp /app/soft/mysql-connector-java-5.1.43-bin.jar /usr/share/java/mysql-connector-java.jar
Grant appuser permission to operate on HDFS
hadoop fs -ls /user
Switch to the hdfs user
(if su fails, run: usermod -s /bin/bash hdfs)
su hdfs
hadoop fs -ls /user
hadoop fs -mkdir /user/appuser
hadoop fs -chmod 777 /user/appuser
hadoop fs -ls /user
exit
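A tighter alternative to chmod 777, still run as the hdfs user before exiting, is to hand the directory over to appuser (not in the original notes):
hadoop fs -chown appuser:appuser /user/appuser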
9、Deploy Spark
The Deploy Client Configuration command failed on the Spark service
/app/opt/cloudera/cm-6.12.0/run/cloudera-scm-agent/process/ccdeploy_spark-conf_etcsparkconf.cloudera.spark_on_yarn_7681643532944931906/logs
Redeploying the client configuration showed this error:
Error: JAVA_HOME is not set and could not be found
The fix is as follows:
Create a symlink for Java
cd /usr
mkdir java
ln -s /app/opt/java /usr/java/default
10、To uninstall, first delete each component in the CDH web UI, then remove the leftover services and files
lsof | grep cm-6
Use it to find files that are still held open and cannot be deleted, then rm them
11、Oozie deployment
Copy the MySQL driver into Oozie's lib and libext directories
cp /app/soft/mysql-connector-java-5.1.43-bin.jar /usr/share/java/mysql-connector-java.jar
12、HBase test
create 'test', {NAME => 'f', VERSIONS => 1}
put 'test','1','f:id','1'
put 'test','1','f:name','yg'
get 'test','1'
scan 'test'
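To clean up the test table afterwards (standard hbase shell commands, not in the original notes):
disable 'test'
drop 'test'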
12、If HDFS does not come up on startup, format the NameNode:
./hadoop namenode -format
13、Grant HDFS permissions for Hive
Enable High Availability in Cloudera Manager (use /app/dfs/jnn for the JournalNode directory)
su hdfs
hadoop fs -ls /user
hadoop fs -mkdir /user/appuser
hadoop fs -chmod 777 /user/appuser
In Cloudera Manager, set the parameters so that the Hive metastore is updated automatically
14、Configure Kudu so that impala-shell operations work (impala-shell)
Enable TLS/SSL for the Tablet Server (set to enabled)
Enable TLS/SSL for the Master (set to enabled)
-- Python plugins (install on every machine)
yum install gcc python-devel
yum install cyrus-sasl*
Note the following when scaling out:
Service unavailable: Cannot initialize clock: Error reading clock. Clock considered unsynchronized
Running ntpstat on the Kudu node printed "unsynchronised, polling server every 64 s", showing the node had not finished clock synchronization.
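A sketch of the usual remedy: restart NTP on the affected node and wait until it reports synchronised before restarting Kudu (assumes the ntpd service from the earlier setup):
service ntpd restart
ntpq -p      # check reachable time sources
ntpstat      # wait until it reports "synchronised"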
15、Kafka monitoring
vim kafka-monitor-start.sh
java -Xms512M -Xmx512M -Xss1024K -XX:PermSize=256m -XX:MaxPermSize=512m -cp KafkaOffsetMonitor-assembly-0.2.0.jar com.quantifind.kafka.offsetapp.OffsetGetterWeb \
--port 8088 \
--zk cloudera24:2181,cloudera25:2182,cloudera26:2181 \
--refresh 10.seconds \
--retain 1.day >/dev/null 2>&1;
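The script needs execute permission before it can be launched in the background (small assumed step):
chmod +x kafka-monitor-start.sh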
nohup ./kafka-monitor-start.sh &
16、Kafka administration
(settings required for external access in the test environment)
advertised.host.name=cloudera5
listeners
./kafka-topics.sh --zookeeper web01:2181,web02:2181,web03:2181 --list
./kafka-topics.sh --create --zookeeper web01:2181,web02:2181,web03:2181 --replication-factor 3 --partitions 20 --topic dp_business_topic_pro
./kafka-topics.sh --create --zookeeper web01:2181,web02:2181,web03:2181 --replication-factor 3 --partitions 20 --topic dp_crawler_topic_pro
./kafka-topics.sh --create --zookeeper web01:2181,web02:2181,web03:2181 --replication-factor 3 --partitions 20 --topic dp_srcmysql_topic_pro
./kafka-topics.sh --create --zookeeper web01:2181,web02:2181,web03:2181 --replication-factor 3 --partitions 20 --topic test
./kafka-topics.sh --create --zookeeper web01:2181,web02:2181,web03:2181 --replication-factor 3 --partitions 20 --topic sf_intelligent_call
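A quick smoke test of the test topic with the console producer and consumer (broker host and the default port 9092 are assumptions for this environment):
./kafka-console-producer.sh --broker-list web01:9092 --topic test
./kafka-console-consumer.sh --bootstrap-server web01:9092 --topic test --from-beginning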
17、Hue installation (Python and MySQL plugins); no metadata DB needed
cp /app/soft/mysql-connector-java-5.1.43-bin.jar /usr/share/java/mysql-connector-java.jar
yum install libxml2-python
yum install -y python-lxml
Install the required libraries on CentOS
yum install krb5-devel cyrus-sasl-gssapi cyrus-sasl-devel libxml2-devel libxslt-devel mysql mysql-devel openldap-devel python-devel python-simplejson sqlite-devel
yum install cyrus-sasl-plain cyrus-sasl-devel cyrus-sasl-gssapi
Add the Thrift Server role in HBase
HBase Thrift Server
-- Install the Python PostgreSQL (psycopg2) interface
yum -y install python-psycopg2
ln -s /usr/lib64/python2.7/site-packages/psycopg2 psycopg2
-- Install the httpd service
yum install httpd
yum install cyrus-sasl-plain cyrus-sasl-devel cyrus-sasl-gssapi
Enable in the HBase configuration:
hbase.thrift.support.proxyuser
hbase.regionserver.thrift.http
Enable in the Hue configuration:
check "Bind Hue Server to Wildcard Address"
The default login is admin/admin
18、impala
Bind the Impala Llama ApplicationMaster to the wildcard address
Select ZooKeeper for HA
---------------------------------------------------
20、Remote copy
scp -r /etc/hosts merry-12:/etc/
scp -r /etc/hosts merry-13:/etc/
scp -r /etc/hosts merry-14:/etc/
scp -r /etc/hosts merry-15:/etc/
scp -r /etc/hosts merry-16:/etc/
scp -r /etc/hosts merry-17:/etc/
scp -r /etc/hosts merry-18:/etc/
scp -r /etc/hosts merry-19:/etc/
scp -r /etc/hosts merry-20:/etc/
scp -r /etc/hosts merry-21:/etc/
scp -r /etc/hosts merry-22:/etc/
scp -r /etc/hosts merry-23:/etc/
scp -r /etc/hosts merry-24:/etc/
scp -r /etc/hosts merry-25:/etc/
scp -r /etc/hosts merry-26:/etc/
scp -r /etc/hosts merry-27:/etc/
scp -r /etc/hosts merry-28:/etc/
scp -r /etc/hosts merry-29:/etc/
scp -r /etc/hosts merry-30:/etc/
scp -r /etc/hosts merry-31:/etc/
scp -r /etc/hosts merry-32:/etc/
scp -r /etc/hosts merry-33:/etc/
scp -r /etc/hosts merry-34:/etc/
scp -r /etc/hosts merry-35:/etc/
scp -r /etc/hosts merry-36:/etc/
scp -r /etc/hosts merry-37:/etc/
scp -r /etc/hosts merry-38:/etc/
scp -r /etc/hosts merry-39:/etc/
scp -r /etc/hosts merry-40:/etc/
scp -r /etc/hosts merry-41:/etc/
scp -r /etc/hosts merry-42:/etc/
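The same fan-out can be written as a loop, assuming the hosts really are the contiguous merry-12 through merry-42 range shown above:
for i in $(seq 12 42); do
  scp /etc/hosts merry-$i:/etc/    # push the updated hosts file to each node
done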