Commit 44900b80 authored by Arvid Heise, committed by zhijiang

[hotfix][docs] Replace/fix links in checkpointing documents.

Parent 5afb3a68
@@ -365,6 +365,6 @@ programs, with minor exceptions:
- The DataSet API introduces special synchronized (superstep-based)
iterations, which are only possible on bounded streams. For details, check
-out the [iteration docs]({{ site.baseurl }}/dev/batch/iterations.html).
+out the [iteration docs]({% link dev/batch/iterations.md %}).
{% top %}
@@ -32,7 +32,7 @@ any type of more elaborate operation.
In order to make state fault tolerant, Flink needs to **checkpoint** the state. Checkpoints allow Flink to recover state and positions
in the streams to give the application the same semantics as a failure-free execution.
-The [documentation on streaming fault tolerance]({{ site.baseurl }}/learn-flink/fault_tolerance.html) describes in detail the technique behind Flink's streaming fault tolerance mechanism.
+The [documentation on streaming fault tolerance]({% link learn-flink/fault_tolerance.md %}) describes in detail the technique behind Flink's streaming fault tolerance mechanism.
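
As a quick illustration of what enabling checkpointing looks like in the Java API, here is a minimal sketch; the interval and the explicit mode are illustrative values, not recommendations from this document:

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnableCheckpointingSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Trigger a checkpoint every 10 seconds; the interval here is purely illustrative.
        env.enableCheckpointing(10_000L);

        // Exactly-once is the default mode; stating it explicitly only for clarity.
        env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
    }
}
```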
## Prerequisites
@@ -72,7 +72,7 @@ Other parameters for checkpointing include:
This option cannot be used when a minimum time between checkpoints is defined.
-- *externalized checkpoints*: You can configure periodic checkpoints to be persisted externally. Externalized checkpoints write their meta data out to persistent storage and are *not* automatically cleaned up when the job fails. This way, you will have a checkpoint around to resume from if your job fails. There are more details in the [deployment notes on externalized checkpoints]({{ site.baseurl }}/ops/state/checkpoints.html#externalized-checkpoints).
+- *externalized checkpoints*: You can configure periodic checkpoints to be persisted externally. Externalized checkpoints write their meta data out to persistent storage and are *not* automatically cleaned up when the job fails. This way, you will have a checkpoint around to resume from if your job fails. There are more details in the [deployment notes on externalized checkpoints]({% link ops/state/checkpoints.md %}#externalized-checkpoints).
- *fail/continue task on checkpoint errors*: This determines if a task will be failed if an error occurs in the execution of the task's checkpoint procedure. This is the default behaviour. Alternatively, when this is disabled, the task will simply decline the checkpoint to the checkpoint coordinator and continue running.
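
A hedged sketch of how these options might be set together via `CheckpointConfig` (method names as in the Flink 1.11-era Java API; all values are illustrative):

```java
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointOptionsSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000L);

        CheckpointConfig config = env.getCheckpointConfig();

        // Leave at least 30 s between the end of one checkpoint and the start of the next.
        config.setMinPauseBetweenCheckpoints(30_000L);

        // Abort a checkpoint attempt that does not finish within 10 minutes.
        config.setCheckpointTimeout(600_000L);

        // Allow only one checkpoint to be in flight at a time.
        config.setMaxConcurrentCheckpoints(1);

        // Retain externalized checkpoint metadata when the job is cancelled,
        // so the job can later be resumed from it.
        config.enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

        // Tolerate zero checkpoint failures, i.e. fail the task on the first
        // checkpoint error (the default behaviour described above).
        config.setTolerableCheckpointFailureNumber(0);

        // Optionally enable unaligned checkpoints (available from Flink 1.11).
        config.enableUnalignedCheckpoints();
    }
}
```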
@@ -175,7 +175,7 @@ env.get_checkpoint_config().enable_unaligned_checkpoints()
### Related Config Options
-Some more parameters and/or defaults may be set via `conf/flink-conf.yaml` (see [configuration]({{ site.baseurl }}/ops/config.html) for a full guide):
+Some more parameters and/or defaults may be set via `conf/flink-conf.yaml` (see [configuration]({% link ops/config.md %}) for a full guide):
{% include generated/checkpointing_configuration.html %}
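
For reference, a minimal `conf/flink-conf.yaml` sketch covering a few commonly used checkpoint-related keys; the paths and values are placeholders, not recommended settings:

```yaml
# Default state backend for jobs that do not set one programmatically.
state.backend: filesystem

# Directory for checkpoint data files and metadata (placeholder path).
state.checkpoints.dir: hdfs:///flink/checkpoints

# Default directory for savepoints (placeholder path).
state.savepoints.dir: hdfs:///flink/savepoints

# Number of completed checkpoints to retain.
state.checkpoints.num-retained: 1
```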
@@ -184,7 +184,7 @@ Some more parameters and/or defaults may be set via `conf/flink-conf.yaml` (see
## Selecting a State Backend
-Flink's [checkpointing mechanism]({{ site.baseurl }}/learn-flink/fault_tolerance.html) stores consistent snapshots
+Flink's [checkpointing mechanism]({% link learn-flink/fault_tolerance.md %}) stores consistent snapshots
of all the state in timers and stateful operators, including connectors, windows, and any [user-defined state](state.html).
Where the checkpoints are stored (e.g., JobManager memory, file system, database) depends on the configured
**State Backend**.
@@ -192,7 +192,7 @@ Where the checkpoints are stored (e.g., JobManager memory, file system, database
By default, state is kept in memory in the TaskManagers and checkpoints are stored in memory in the JobManager. For proper persistence of large state,
Flink supports various approaches for storing and checkpointing state in other state backends. The choice of state backend can be configured via `StreamExecutionEnvironment.setStateBackend(…)`.
-See [state backends]({{ site.baseurl }}/ops/state/state_backends.html) for more details on the available state backends and options for job-wide and cluster-wide configuration.
+See [state backends]({% link ops/state/state_backends.md %}) for more details on the available state backends and options for job-wide and cluster-wide configuration.
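
A minimal sketch of selecting a state backend programmatically; the `FsStateBackend` and the HDFS path are only examples, and any of the available backends and locations could be used instead:

```java
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Keep working state on the TaskManager heap, but write checkpoints to a
        // durable file system instead of JobManager memory (path is a placeholder).
        env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));
    }
}
```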
## State Checkpoints in Iterative Jobs
@@ -207,7 +207,7 @@ Please note that records in flight in the loop edges (and the state changes asso
## Restart Strategies
Flink supports different restart strategies which control how the jobs are restarted in case of a failure. For more
-information, see [Restart Strategies]({{ site.baseurl }}/dev/restart_strategies.html).
+information, see [Restart Strategies]({% link dev/task_failure_recovery.md %}).
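
As an illustration, a restart strategy can also be set directly on the execution environment; the sketch below uses illustrative values (3 attempts, 10 s delay), not recommendations:

```java
import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RestartStrategySketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Retry the job at most 3 times, waiting 10 seconds between attempts.
        env.setRestartStrategy(
                RestartStrategies.fixedDelayRestart(3, Time.of(10, TimeUnit.SECONDS)));
    }
}
```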
{% top %}
@@ -32,7 +32,7 @@ Checkpoints make state in Flink fault tolerant by allowing state and the
corresponding stream positions to be recovered, thereby giving the application
the same semantics as a failure-free execution.
-See [Checkpointing]({{ site.baseurl }}/dev/stream/state/checkpointing.html) for how to enable and
+See [Checkpointing]({% link dev/stream/state/checkpointing.md %}) for how to enable and
configure checkpoints for your program.
## Retained Checkpoints