# Flink Checkpoint to HDFS

## Introduction

In a distributed system like Apache Flink, checkpointing is essential for fault tolerance and exactly-once processing. In this tutorial, we will learn how to configure Flink to write checkpoints to HDFS (Hadoop Distributed File System).

## Steps to Configure Flink Checkpoint to HDFS

| Step | Description |
| --- | --- |
| 1 | Create a Hadoop cluster or set up HDFS in standalone mode |
| 2 | Make Hadoop's client classes available to Flink (e.g. via a `hadoop-client` dependency or `HADOOP_CLASSPATH`) |
| 3 | Update Flink configuration to specify HDFS path for checkpoints |
| 4 | Start Flink job with checkpointing enabled |

### Step 1: Create a Hadoop Cluster or Setup HDFS Standalone Mode

If you already have a Hadoop cluster, you can skip this step. Otherwise, you can set up HDFS in standalone mode for testing purposes. Follow the Hadoop documentation for installation instructions.
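For a quick local test, a single-node (pseudo-distributed) HDFS can typically be started as follows. This sketch assumes a standard Hadoop installation with its scripts on your `PATH` and passwordless SSH to localhost configured, as described in the Hadoop single-node setup guide:

```shell
# Format the NameNode metadata directory (first run only -- this erases existing HDFS data)
hdfs namenode -format

# Start the NameNode and DataNode daemons
start-dfs.sh

# Verify HDFS is up and pre-create a directory for Flink checkpoints
hdfs dfs -mkdir -p /flink-checkpoints
hdfs dfs -ls /
```

The NameNode's web UI (port 9870 on recent Hadoop versions) is a convenient way to confirm the cluster is healthy before pointing Flink at it.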

### Step 2: Configure Flink Dependencies

Flink does not bundle Hadoop, so the Hadoop client classes must be on the classpath for `hdfs://` URIs to be resolvable. If you build your job with Maven, add the Hadoop client dependency to your project's `pom.xml`, matching the Hadoop version of your cluster:

```xml
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
    <scope>provided</scope>
</dependency>
```

The `provided` scope assumes Hadoop is already available at runtime, which is the usual setup when Flink runs alongside a Hadoop cluster.
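On a Flink standalone cluster, the usual way to expose Hadoop's classes at runtime is to export `HADOOP_CLASSPATH` before starting Flink (this assumes the `hadoop` command is on your `PATH`):

```shell
# Make Hadoop's jars visible to Flink's scripts and processes
export HADOOP_CLASSPATH=$(hadoop classpath)

# Then start the Flink cluster / submit jobs as usual
./bin/start-cluster.sh
```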

### Step 3: Update Flink Configuration

Update your Flink job configuration to point checkpoint storage at an HDFS path. You can do this programmatically in your job code or through `flink-conf.yaml`:

```java
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Replace <namenode-host>:<port> with your NameNode's address (commonly port 8020)
env.setStateBackend(new FsStateBackend("hdfs://<namenode-host>:<port>/flink-checkpoints"));
env.enableCheckpointing(5000); // checkpoint every 5000 ms
```

Note that in Flink 1.13+, `FsStateBackend` is deprecated in favor of `HashMapStateBackend` combined with a checkpoint storage path, but the call above still works on those versions.
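Alternatively, the same settings can be applied cluster-wide in `flink-conf.yaml`; the NameNode address below is a placeholder:

```yaml
# Use the filesystem state backend and store checkpoints in HDFS
state.backend: filesystem
state.checkpoints.dir: hdfs://<namenode-host>:<port>/flink-checkpoints

# Trigger a checkpoint every 5 seconds
execution.checkpointing.interval: 5s
```

Values set programmatically in the job take precedence over `flink-conf.yaml`, so cluster-wide defaults can still be overridden per job.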

### Step 4: Start Flink Job

Start your Flink job with the configured checkpointing parameters. Here's a simple example of a Flink job:

```java
DataStream<String> stream = env.socketTextStream("localhost", 9999);
stream.map(String::toUpperCase)
      .print();
env.execute("UppercaseJob");
```
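If the job later fails or is restarted, it can be resumed from a retained checkpoint (or a savepoint) with the Flink CLI's `-s` flag. The checkpoint path below is illustrative; by default Flink deletes checkpoints when a job is cancelled, so checkpoint retention must be enabled in the checkpoint configuration for this to work:

```shell
# Resume a job from a retained checkpoint in HDFS (path and jar name are placeholders)
flink run -s hdfs://<namenode-host>:<port>/flink-checkpoints/<job-id>/chk-42 my-job.jar
```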

## Conclusion

In this tutorial, we have learned how to enable Flink checkpointing to HDFS for fault tolerance. Writing checkpoints to HDFS stores the job's state in reliable, replicated storage so it can be recovered after a failure. By following the steps outlined above, you can configure your Flink job to write checkpoints to HDFS seamlessly. Happy coding!