0x00 Contents


  1. Merging Small Files with SequenceFile
  2. Verifying the Result

Note: in a Hadoop cluster, metadata is managed by the NameNode. Each small file gets its own split and its own metadata, so a large number of small files puts heavy pressure on memory and on the NameNode. Merging small files is therefore a useful optimization. There are essentially two ways to do it: one is to convert the files into the SequenceFile format and merge them, the other is to use CombineFileInputFormat.

SequenceFile is chosen here because it is a binary key-value format. When merging small files we can exploit this property: each small file's name becomes the key, and that file's content becomes the value.

0x01 Merging Small Files with SequenceFile

1. Preparation

a. There are four files on my HDFS:

[hadoop-sny@master ~]$ hadoop fs -ls /files/
Found 4 items
-rw-r--r-- 1 hadoop-sny supergroup 39 2019-04-18 21:20 /files/put.txt
-rw-r--r-- 1 hadoop-sny supergroup 50 2019-12-30 17:12 /files/small1.txt
-rw-r--r-- 1 hadoop-sny supergroup 31 2019-12-30 17:10 /files/small2.txt
-rw-r--r-- 1 hadoop-sny supergroup 49 2019-12-30 17:11 /files/small3.txt

Their contents are as follows (the actual content can be anything):

put.txt:
shao nai yi
nai nai yi yi
shao nai nai

small1.txt:
hello hi hi hadoop
spark kafka shao
nai yi nai yi

small2.txt:
hello 1
hi 1
shao 3
nai 1
yi 3

small3.txt:
guangdong 300
hebei 200
beijing 198
tianjing 209

b. Besides creating the files on Linux and then uploading them, you can also write a file directly from a stream, e.g. small1.txt:

hadoop fs -put - /files/small1.txt

When you have finished typing, press Ctrl + D to end the input.

2. Complete Code

a. Full code of SmallFilesToSequenceFileConverter

package com.shaonaiyi.hadoop.filetype.smallfiles;

import com.shaonaiyi.hadoop.utils.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

import java.io.IOException;
/**
 * @Author shaonaiyi@163.com
 * @Date 2019/12/30 16:29
 * @Description Merge small files into one SequenceFile
 */
public class SmallFilesToSequenceFileConverter {

    static class SequenceFileMapper extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {
        private Text fileNameKey;

        @Override
        protected void setup(Context context) {
            // Use the path of the file backing this split as the output key
            InputSplit split = context.getInputSplit();
            Path path = ((FileSplit) split).getPath();
            fileNameKey = new Text(path.toString());
        }

        @Override
        protected void map(NullWritable key, BytesWritable value, Context context) throws IOException, InterruptedException {
            // key: file name, value: the whole content of that file
            context.write(fileNameKey, value);
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration(), "SmallFilesToSequenceFileConverter");

        job.setJarByClass(SmallFilesToSequenceFileConverter.class);

        // Read each small file as a single record
        job.setInputFormatClass(WholeFileInputFormat.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);

        job.setMapperClass(SequenceFileMapper.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));

        String outputPath = args[1];
        FileUtils.deleteFileIfExists(outputPath);
        FileOutputFormat.setOutputPath(job, new Path(outputPath));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

}
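
Note: FileUtils.deleteFileIfExists comes from the author's own utility package (com.shaonaiyi.hadoop.utils) and its source is not shown in this post. A minimal sketch, assuming it simply removes the output path from HDFS when it already exists so that the job does not fail with an "output directory already exists" error, could look like this:

package com.shaonaiyi.hadoop.utils;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

// Assumed implementation, for illustration only; the real utility may differ.
public class FileUtils {

    public static void deleteFileIfExists(String pathStr) throws IOException {
        Configuration configuration = new Configuration();
        Path path = new Path(pathStr);
        FileSystem fs = path.getFileSystem(configuration);
        if (fs.exists(path)) {
            // Recursively delete the existing output directory
            fs.delete(path, true);
        }
    }
}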

b. Full code of WholeFileInputFormat

package com.shaonaiyi.hadoop.filetype.smallfiles;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

import java.io.IOException;

/**
 * @Author shaonaiyi@163.com
 * @Date 2019/12/30 16:34
 * @Description WholeFileInputFormat: treat each file as a single, unsplittable record
 */
public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        // Never split a file, so one record always holds a whole file
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        WholeFileRecordReader reader = new WholeFileRecordReader();
        reader.initialize(inputSplit, taskAttemptContext);
        return reader;
    }
}

c. Full code of WholeFileRecordReader

package com.shaonaiyi.hadoop.filetype.smallfiles;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

import java.io.IOException;

/**
 * @Author shaonaiyi@163.com
 * @Date 2019/12/30 16:35
 * @Description WholeFileRecordReader: read an entire file as one key-value pair
 */
public class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {

    private FileSplit fileSplit;
    private Configuration configuration;
    private BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        this.fileSplit = (FileSplit) inputSplit;
        this.configuration = taskAttemptContext.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!processed) {
            // Read the whole file into the value in one go
            byte[] contents = new byte[(int) fileSplit.getLength()];
            Path file = fileSplit.getPath();
            FileSystem fs = file.getFileSystem(configuration);
            FSDataInputStream in = null;
            try {
                in = fs.open(file);
                IOUtils.readFully(in, contents, 0, contents.length);
                value.set(contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }
            processed = true;
            return true;
        }
        return false;
    }

    @Override
    public NullWritable getCurrentKey() throws IOException, InterruptedException {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return processed ? 1.0f : 0.0f;
    }

    @Override
    public void close() throws IOException {
        // Nothing to close here: the input stream is closed in nextKeyValue()
    }
}

0x02 Verifying the Result

1. Start HDFS and YARN

start-dfs.sh

start-yarn.sh

2. Run the Job

a. Package the project, upload the jar to the master node, and run it; two arguments are required (the input and output paths):

yarn jar ~/jar/hadoop-learning-1.0.jar com.shaonaiyi.hadoop.filetype.smallfiles.SmallFilesToSequenceFileConverter /files /output

3. View the Results

a. A single merged file is generated under the output path.

b. Viewing the file directly shows the merged content, but it is raw and hard to read.

c. Viewing it with hadoop fs -text, you can see that each key is a file name and each value is the binary content of that file.
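
If you want to check the result programmatically rather than with hadoop fs -text, a small reader like the sketch below works (the class name and the path argument are my own additions; the exact name of the part file depends on the job). It opens the merged SequenceFile with SequenceFile.Reader and prints each file name together with its content:

package com.shaonaiyi.hadoop.filetype.smallfiles;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

import java.nio.charset.StandardCharsets;

// Illustrative reader for the merged SequenceFile (not part of the original post).
public class SequenceFileDumper {

    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // args[0]: the part file produced by the job, somewhere under the output path
        Path path = new Path(args[0]);
        try (SequenceFile.Reader reader = new SequenceFile.Reader(configuration,
                SequenceFile.Reader.file(path))) {
            Text key = new Text();
            BytesWritable value = new BytesWritable();
            while (reader.next(key, value)) {
                // key: original file path, value: that file's raw bytes
                System.out.println("==== " + key + " (" + value.getLength() + " bytes) ====");
                System.out.println(new String(value.copyBytes(), StandardCharsets.UTF_8));
            }
        }
    }
}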

0xFF Summary

  1. There are 4 files under the input path, so by default 4 map tasks are started. We can instead use CombineTextInputFormat so that only one map task runs (a short sketch of the driver change follows below):
job.setInputFormatClass(CombineTextInputFormat.class);

For the detailed steps, see the tutorial: 通过CombineTextInputFormat实现合并小文件(调优技能) (Merging Small Files with CombineTextInputFormat).
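
For reference, the relevant driver change is sketched below. The 4 MB split size is only an example value, and with CombineTextInputFormat the mapper reads LongWritable/Text records, so the mapper in this post would have to change accordingly:

import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

// Pack many small files into a single split so that only one map task starts
job.setInputFormatClass(CombineTextInputFormat.class);
// Optionally cap the size of a combined split, e.g. at 4 MB (example value)
CombineTextInputFormat.setMaxInputSplitSize(job, 4 * 1024 * 1024);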


About the author: 邵奈一

Full-stack engineer, market observer, column editor


Bonus:

邵奈一的技术博客导航 (Shao Naiyi's technical blog navigation)

Original content by 邵奈一; if you repost it, please credit the source.