- # Configuration of the "dfs" context for ganglia
- # Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
- # dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
- dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
- dfs.period=10
- # Note: this must be the multicast address that gmond listens on; otherwise no data will be collected!
- dfs.servers=239.2.11.71:8649
- # Configuration of the "mapred" context for ganglia
- # Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
- # mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
- mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
- mapred.period=10
- mapred.servers=239.2.11.71:8649
- # Configuration of the "jvm" context for ganglia
- jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
- jvm.period=10
- jvm.servers=239.2.11.71:8649
Distribute this configuration file to every DataNode, then restart the Hadoop cluster; the running status of each Hadoop node will then appear on the Ganglia monitoring page. HBase monitoring is set up in the same way.
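
A minimal sketch of the distribute-and-restart step, assuming a Hadoop 1.x-style layout where `$HADOOP_HOME/conf/slaves` lists the DataNode hostnames and the stock start/stop scripts are used (adjust paths and commands to your installation):

```sh
# Sketch only: assumes passwordless SSH to the slaves and that $HADOOP_HOME
# points at the same path on every node.
for host in $(cat "$HADOOP_HOME/conf/slaves"); do
  scp "$HADOOP_HOME/conf/hadoop-metrics.properties" \
      "$host:$HADOOP_HOME/conf/hadoop-metrics.properties"
done

# Restart all daemons so they pick up the new metrics configuration.
"$HADOOP_HOME/bin/stop-all.sh"
"$HADOOP_HOME/bin/start-all.sh"
```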
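
For HBase releases that still use this metrics v1 framework, the analogous file is `conf/hadoop-metrics.properties` under the HBase installation directory. A sketch of an "hbase" context using the same Ganglia 3.1 multicast channel as above (the exact context names depend on the HBase version):

```properties
# Configuration of the "hbase" context for ganglia
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
hbase.period=10
hbase.servers=239.2.11.71:8649
```

The jvm and rpc contexts can be pointed at the same address in the same way; the HBase daemons must be restarted for the change to take effect.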