- 18 Jun 2020, 3 commits
-
-
Committed by zhijiang
-
Committed by Yu Li
This closes #12683.
-
Committed by Roman Khachatryan
-
- 17 Jun 2020, 26 commits
-
-
Committed by Shengkai
The current timestamp format in the JSON format follows RFC 3339 rather than the SQL standard. This commit changes the default behavior to parse/generate timestamps using the SQL standard. It also introduces an option "json.timestamp-format.standard" to allow falling back to the ISO-8601 standard. This closes #12661
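As a sketch of the option described above (the table name, schema, and connector settings are hypothetical; only the 'json.timestamp-format.standard' option comes from the commit), a table using the JSON format could opt back into the old ISO-8601 timestamp behavior like this:

```sql
-- Hypothetical table for illustration; only the
-- 'json.timestamp-format.standard' option is taken from the commit above.
CREATE TABLE user_events (
  user_id    BIGINT,
  event_time TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',   -- assumed connector, not part of the commit
  'format'    = 'json',
  -- the new default is 'SQL'; set 'ISO-8601' to restore the old behavior
  'json.timestamp-format.standard' = 'ISO-8601'
);
```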
-
Committed by Leonard Xu
This closes #12691
-
Committed by Yun Tang
[FLINK-18238][checkpoint] Broadcast CancelCheckpointMarker while executing checkpoint aborted by coordinator RPC. When an abort-checkpoint RPC arrives from the CheckpointCoordinator, it prevents execution of the respective checkpoint that was already triggered. But we also need to broadcast the CancelCheckpointMarker before exiting the execution, otherwise the downstream side would probably wait for barrier alignment until it deadlocks. This closes #12664.
-
Committed by Robert Metzger
This reverts commit 8ca388ca.
-
Committed by Robert Metzger
Revert "[FLINK-17800][rocksdb] Support customized RocksDB write/read options and use RocksDBResourceContainer to get them" This reverts commit f1250625.
-
Committed by Robert Metzger
-
Committed by Dian Fu
[FLINK-18330][python][legal] Update the NOTICE file of flink-python module adding beam-runners-core-java and beam-vendor-bytebuddy This closes #12692.
-
Committed by Chesnay Schepler
-
Committed by Chesnay Schepler
-
Committed by Chesnay Schepler
-
Committed by Chesnay Schepler
-
Committed by Yichao Yang
This closes #12313
-
Committed by Yichao Yang
This closes #12311
-
Committed by Chesnay Schepler
-
Committed by Chesnay Schepler
-
Committed by Robert Metzger
Revert "[FLINK-18072][hbase] Fix HBaseLookupFunction can not work with new internal data structure RowData" This reverts commit b2711c5d because it broke master.
-
Committed by Leonard Xu
This closes #12594
-
Committed by Jark Wu
[FLINK-18303][filesystem][hive] Fix Filesystem connector doesn't flush part files after rolling interval. This commit introduces the option 'sink.rolling-policy.check-interval' (default 1min) to control how frequently part-file rollover is checked.
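A minimal sketch of a filesystem sink using the new option (the table name, path, and format are hypothetical; only 'sink.rolling-policy.check-interval' comes from the commit above):

```sql
-- Hypothetical filesystem sink; 'sink.rolling-policy.check-interval'
-- (default 1min) is the option introduced by the commit above.
CREATE TABLE fs_sink (
  id      BIGINT,
  payload STRING
) WITH (
  'connector' = 'filesystem',
  'path'      = 'file:///tmp/output',  -- assumed path for illustration
  'format'    = 'json',                -- assumed format
  -- check for part files that should roll over every 30 seconds
  'sink.rolling-policy.check-interval' = '30s'
);
```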
-
Committed by Jark Wu
-
Committed by Jark Wu
This closes #12657
-
Committed by Jark Wu
-
Committed by Jark Wu
-
Committed by Jark Wu
-
Committed by Jark Wu
-
Committed by Jark Wu
-
Committed by Jark Wu
This closes #12632
-
- 16 Jun 2020, 11 commits
-
-
Committed by Allen Madsen
This closes #12663
-
Committed by Aljoscha Krettek
Before, it could happen that the new messages written in the test were not written to the new partition. Now we use explicit keys that we know will hash to the second partition. You can verify this by changing "key" to "keya"; the test will then fail deterministically.
-
Committed by Aljoscha Krettek
This was broken because the behaviour of the Kafka/ZooKeeper command-line tools in Kafka 2.4.1 is slightly different: zookeeper_shell.sh no longer prints debug output to stderr as it did before. We change queryBrokerStatus() to consume stdout instead and check that we get valid information for the broker. The output of kafka-topics.sh now has a space between "PartitionCount:" and the partition count: before it was "PartitionCount:2", now it is "PartitionCount: 2". We fix this by making the regex more lenient. This also splits the waiting on ZooKeeper and Kafka into two loops, to better see which one we are blocking on.
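The "more lenient regex" idea can be sketched as follows (the pattern and sample strings are illustrative, not the test's actual code; the real fix lives in the Java test suite): allowing optional whitespace after "PartitionCount:" makes one pattern match the output of both Kafka versions.

```python
import re

# Illustrative pattern: \s* tolerates the optional space that
# kafka-topics.sh started emitting in Kafka 2.4.1.
PARTITION_COUNT_RE = re.compile(r"PartitionCount:\s*(\d+)")

old_output = "Topic:test PartitionCount:2 ReplicationFactor:1"   # pre-2.4.1 style
new_output = "Topic: test PartitionCount: 2 ReplicationFactor: 1"  # 2.4.1 style

print(PARTITION_COUNT_RE.search(old_output).group(1))  # -> 2
print(PARTITION_COUNT_RE.search(new_output).group(1))  # -> 2
```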
-
Committed by godfreyhe
This closes #12643
-
Committed by godfreyhe
-
Committed by godfreyhe
-
Committed by godfreyhe
This closes #12654.
-
Committed by Chesnay Schepler
-
Committed by Chesnay Schepler
-
Committed by Chesnay Schepler
-
Committed by Chesnay Schepler
-