一、Working with keytabs
1、View the principals in a keytab:
klist -kte **.keytab
2、Authenticate with the keytab:
kinit -kt **.keytab -p **@**.COM
3、Check that authentication succeeded:
klist
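The three steps above can be wrapped into one script that only re-authenticates when the ticket cache is empty or expired (`klist -s` exits 0 when a valid ticket is present). A minimal sketch; the keytab path and principal below are placeholder values, not ones from this document:

```shell
#!/bin/sh
# Hedged sketch: re-authenticate only when no valid ticket is cached.
# KEYTAB and PRINCIPAL are placeholder values - substitute your own.
KEYTAB="/etc/security/keytabs/user.keytab"
PRINCIPAL="user@EXAMPLE.COM"

if ! command -v klist >/dev/null 2>&1; then
    status="no Kerberos client on this host"
elif klist -s 2>/dev/null; then
    status="ticket still valid"          # a usable TGT is already cached
elif kinit -kt "$KEYTAB" "$PRINCIPAL" 2>/dev/null; then
    status="re-authenticated"            # fresh ticket obtained from the keytab
else
    status="kinit failed"                # bad keytab/principal or KDC unreachable
fi
echo "$status"
```

This is handy as a pre-step in cron jobs that run kerberized hadoop/kafka commands.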

4、Query a user created in LDAP:
ldapsearch -x -b "dc=citic,dc=com" "(uid=username)"

二、Common Hadoop commands:
1、Kill a job shown as RUNNING on the 8088 (YARN ResourceManager) web UI:
yarn application -kill application_1544615850246_0046
2、Submit a MapReduce job:
(1)HDP 3.1:
hadoop jar /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-mapreduce-examples-3.1.1.3.1.0.0-78.jar wordcount  -D mapreduce.job.queuename=6528cce9-ed4e-42c2-b178-4b38d1e7e930 /user/test/input/20190919.txt /user/test/output/test 
(2)HDP 2.6:
hadoop jar /usr/hdp/2.6.0.3-8/hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3.2.6.0.3-8.jar wordcount -D mapreduce.job.queuename=open-default-cluster26  /user/test/input/201911261515.txt   /user/test/output/test1

3、Submit a Spark job:
HDP 2.6:
/usr/hdp/2.6.0.3-8/spark/bin/spark-submit --class  org.apache.spark.examples.SparkPi --master yarn-cluster --num-executors 1 --driver-memory 512m --executor-memory 512m --executor-cores 1 --queue open-default-cluster26 --keytab /run/testdata/test_data/**.keytab --principal  **.principal /usr/hdp/2.6.0.3-8/spark/lib/spark-examples-1.6.3.2.6.0.3-8-hadoop2.7.3.2.6.0.3-8.jar  10
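The long spark-submit line above is easier to maintain as a small wrapper that assembles the command from variables, so the queue, keytab, and principal can be swapped per cluster. A sketch using the HDP 2.6 paths shown above; the keytab and principal values here are placeholders, and with DRY_RUN=1 (the default) the command is only printed, not executed:

```shell
#!/bin/sh
# Hedged sketch: assemble the spark-submit command from variables.
# KEYTAB and PRINCIPAL are placeholder values - substitute your own.
SPARK_HOME="/usr/hdp/2.6.0.3-8/spark"
QUEUE="open-default-cluster26"
KEYTAB="/run/testdata/test_data/user.keytab"   # placeholder
PRINCIPAL="user@EXAMPLE.COM"                   # placeholder
JAR="$SPARK_HOME/lib/spark-examples-1.6.3.2.6.0.3-8-hadoop2.7.3.2.6.0.3-8.jar"

CMD="$SPARK_HOME/bin/spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn-cluster --num-executors 1 --driver-memory 512m \
--executor-memory 512m --executor-cores 1 --queue $QUEUE \
--keytab $KEYTAB --principal $PRINCIPAL $JAR 10"

if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$CMD"        # inspect the full command before running it for real
else
    eval "$CMD"
fi
```

Run with DRY_RUN=0 on a cluster node to actually submit.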

4、Common Kafka commands

HDP 2.6

List the topics in the Kafka cluster via ZooKeeper:

./kafka-topics.sh --zookeeper host1:2181,host2:2181,host3:2181 --list

Produce messages from the console:

./kafka-console-producer.sh --broker-list host1:6667,host2:6667,host3:6667 --topic wang_test01_20190909_1568008787736 --security-protocol SASL_PLAINTEXT

Produce messages from a file:

cat /root/test/20190919.txt | ./kafka-console-producer.sh --broker-list host1:6667,host2:6667,host3:6667 --topic topic_name --security-protocol SASL_PLAINTEXT

Consume messages:

./kafka-console-consumer.sh --bootstrap-server host1:6667,host2:6667,host3:6667 --topic topic_name --from-beginning --security-protocol SASL_PLAINTEXT

Write consumed messages to a file:

./kafka-console-consumer.sh --bootstrap-server host1:6667,host2:6667,host3:6667 --topic topic_name --from-beginning --security-protocol SASL_PLAINTEXT >> /root/test/20190919.txt
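The dump-to-file pattern above can likewise be parameterized, e.g. naming the output file after the current date. This sketch only prints the command it would run (remove the echo to execute against a real cluster); the broker list, topic, and output directory are placeholder values:

```shell
#!/bin/sh
# Hedged sketch: build a "dump topic to a dated file" command like the
# example above. Brokers, topic, and output path are placeholders.
BROKERS="host1:6667,host2:6667,host3:6667"
TOPIC="topic_name"
OUTFILE="/root/test/$(date +%Y%m%d).txt"   # e.g. /root/test/20190919.txt

CMD="./kafka-console-consumer.sh --bootstrap-server $BROKERS --topic $TOPIC \
--from-beginning --security-protocol SASL_PLAINTEXT"

# print only; remove the echo to actually consume and append to OUTFILE
echo "$CMD >> $OUTFILE"
```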

5、Common HDFS commands

List an HDFS directory:

hadoop fs -ls /

Upload a file:

hadoop fs -put localdata hdfs_path

Delete a file or directory on HDFS (bypassing the trash):
hadoop fs -rm -r -skipTrash hdfs_path