Hadoop in Action (2): org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray
Scenario
- Explore hands-on Hadoop development to better understand the design and core ideas behind the Hadoop source code
- Experimental environment
- Print the HDFS directory listing
- Create a new file in HDFS
- Write both English and Chinese data to it
Code Implementation
package hadoop;

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class Get_HDFS_Files {

    public static void hdfsFileTest() throws IOException {
        String uri = "hdfs://192.168.1.110:9000/";
        Configuration config = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), config);

        // List all files and directories directly under the HDFS root (one level only)
        FileStatus[] status = fs.listStatus(new Path("/"));
        for (FileStatus sta : status) {
            System.out.println(sta);
        }

        // Create a new file in HDFS and write English and Chinese data;
        // encode as UTF-8 explicitly so the bytes do not depend on the platform default charset
        FSDataOutputStream os = fs.create(new Path("/test/test.log"));
        os.write("hello hadoop ! 非常棒。".getBytes(StandardCharsets.UTF_8));
        os.flush();
        os.close();

        // Read the file back and print it to the console;
        // 'true' tells copyBytes to close the streams once the copy completes
        InputStream is = fs.open(new Path("/test/test.log"));
        IOUtils.copyBytes(is, System.out, 1024, true);
    }

    public static void main(String[] args) throws IOException {
        hdfsFileTest();
    }
}
Program output:

FileStatus{path=hdfs://192.168.1.180:9000/hive; isDirectory=true; modification_time=1521966511910; access_time=0; owner=root; group=supergroup; permission=rwxr-xr-x; isSymlink=false}
FileStatus{path=hdfs://192.168.1.180:9000/opt; isDirectory=true; modification_time=1521965196888; access_time=0; owner=root; group=supergroup; permission=rwxr-xr-x; isSymlink=false}
FileStatus{path=hdfs://192.168.1.180:9000/tmp; isDirectory=true; modification_time=1521965267138; access_time=0; owner=root; group=supergroup; permission=rwx------; isSymlink=false}
hello hadoop ! 非常棒。
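Note that listStatus only walks one level under the given path. When the whole tree is needed, FileSystem.listFiles(path, true) iterates every file recursively; a minimal sketch, reusing the fs handle from the code above:

import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.RemoteIterator;

// Recursively enumerate every file under the root
// (listFiles returns files only, not directories)
RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path("/"), true);
while (it.hasNext()) {
    LocatedFileStatus file = it.next();
    System.out.println(file.getPath() + "  " + file.getLen() + " bytes");
}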
Issues Encountered

Garbled Chinese text: if the Eclipse workspace or project encoding is not UTF-8, the Chinese string literal in the source file can be compiled with the wrong charset, and the data written to HDFS comes out garbled. Set the encoding in the IDE:
- Window – Preferences – Workspace – Text file encoding – UTF-8 – Apply – OK
- Select the project – right-click – Properties – Text file encoding – UTF-8 – Apply – OK
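The write side in the code above already encodes explicitly with StandardCharsets.UTF_8. The read side can be made just as explicit, so the console output does not depend on the JVM default charset either; a minimal sketch, again reusing the fs handle from the code above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Decode the HDFS stream explicitly as UTF-8 instead of the platform default
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(fs.open(new Path("/test/test.log")), StandardCharsets.UTF_8))) {
    String line;
    while ((line = reader.readLine()) != null) {
        System.out.println(line);
    }
}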
java.lang.UnsatisfiedLinkError:
org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)

This error typically appears on Windows when a hadoop.dll built for a different Hadoop version is found on the PATH (or in C:\Windows\System32): the library loads, so Hadoop takes the native code path, but the mismatched build does not contain this CRC method. Replace hadoop.dll (and winutils.exe) with a build matching the Hadoop jars on the classpath, or remove the stale library entirely so Hadoop falls back to its pure-Java checksum implementation.
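A quick way to see whether a native Hadoop library is being picked up at all, and where the JVM looks for it, is Hadoop's public NativeCodeLoader API; a minimal diagnostic sketch:

import org.apache.hadoop.util.NativeCodeLoader;

public class NativeCheck {
    public static void main(String[] args) {
        // true means hadoop.dll / libhadoop.so was found and loaded;
        // if its version differs from the client jars, native calls such as
        // NativeCrc32.nativeComputeChunkedSumsByteArray throw UnsatisfiedLinkError
        System.out.println("native hadoop library loaded: "
                + NativeCodeLoader.isNativeCodeLoaded());
        // the directories the JVM searches for native libraries
        System.out.println("java.library.path = "
                + System.getProperty("java.library.path"));
    }
}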