HDFS in Practice: Deleting Files

I. Environment

  • VM: CentOS 6.5
  • Host: Windows 10
  • apache-maven-3.5.3
  • hadoop-2.6.4
  • jdk1.8.0_161

II. Code

1. The source code is as follows:

package shen.liu.hdfs.practice;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HDFSFileDelete {
    public static void main(String[] args) throws IOException {
        if (args.length != 1) {
            System.out.println("parameter error");
        } else {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            /* Path names a file or directory in a FileSystem.
             * Path strings use the slash as the directory separator;
             * a path string is absolute if it begins with a slash.
             */
            Path hdfs = new Path(args[0]);

            // Delete the file at this path; if it is a directory, delete it recursively.
            boolean b = fs.delete(hdfs, true);
            /* Parameters:
             *   f         the path to delete.
             *   recursive if the path is a directory and set to true, the directory
             *             is deleted, otherwise an exception is thrown; for a plain
             *             file, recursive may be either true or false.
             * Returns: true if the delete succeeded, otherwise false.
             * Throws: IOException
             */
            if (b) {
                System.out.println("The file or directory " + hdfs + " has been deleted!");
            } else {
                System.out.println("Sorry, the file has not been deleted!");
            }
            fs.close();
        }
    }
}
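
As the Javadoc excerpt in the code notes, recursive=false only succeeds on files and empty directories; calling delete on a non-empty directory with recursive=false throws an IOException. A minimal sketch illustrating both modes (the class name and the paths /tmp/demo-file and /tmp/demo-dir are made up for this example and assumed to already exist on HDFS):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteSemanticsDemo {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // For a plain file, the recursive flag is ignored.
        fs.delete(new Path("/tmp/demo-file"), false);

        Path dir = new Path("/tmp/demo-dir");
        try {
            // A non-empty directory with recursive=false throws an IOException.
            fs.delete(dir, false);
        } catch (IOException e) {
            System.out.println("need recursive=true: " + e.getMessage());
        }
        // recursive=true removes the directory and everything under it.
        fs.delete(dir, true);

        fs.close();
    }
}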

2. The execution result is as follows:

File path: /output
Is directory: true
Permissions: rwxr-xr-x
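
For reference, metadata like the path, directory flag, and permissions shown above can be read through Hadoop's FileStatus API. A minimal sketch, assuming /output exists (the class name HDFSFileInfo is made up for this example):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HDFSFileInfo {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Read and print the metadata of a single path.
        FileStatus status = fs.getFileStatus(new Path("/output"));
        System.out.println("File path: " + status.getPath());
        System.out.println("Is directory: " + status.isDirectory());
        System.out.println("Permissions: " + status.getPermission());

        fs.close();
    }
}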

3. However, if you run this code unmodified on Windows, it throws an error:

Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://192.168.211.3:9000/output, expected: file:///
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
    at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:532)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:750)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:527)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
    at utils.HDFSUtils.browseHdfsFile(HDFSUtils.java:26)
    at HDFSTest.main(HDFSTest.java:6)

The error Wrong FS: hdfs://192.168.211.3:9000/output, expected: file:/// is caused by a file system mismatch: the client is using the default local file system instead of HDFS, so we need to set the file system to HDFS. (On the Linux VM this is picked up from the cluster configuration file, core-site.xml, which is why the same code runs there unchanged.)
Add the following line: conf.set("fs.defaultFS", "hdfs://192.168.211.3:9000"); and replace the IP address with your own.
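
With that line in place, the setup looks like the sketch below (the class name HDFSClientConfig is made up for this example; the IP and port are the ones from the error above). As an alternative to setting fs.defaultFS, the HDFS URI can also be passed to FileSystem.get directly:

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HDFSClientConfig {
    // Point the client at the NameNode; without this, FileSystem.get(conf)
    // falls back to the local file:/// file system on Windows.
    public static FileSystem connect() throws IOException {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.211.3:9000");
        return FileSystem.get(conf);
    }

    // Equivalent alternative: pass the URI to FileSystem.get explicitly,
    // which overrides whatever fs.defaultFS is configured.
    public static FileSystem connectByUri() throws IOException {
        return FileSystem.get(URI.create("hdfs://192.168.211.3:9000"), new Configuration());
    }
}

Either way, the returned FileSystem instance talks to HDFS rather than the local disk, and HDFSFileDelete then runs from Windows without any other changes.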