Commit 5f0af06f
Authored Feb 29, 2016 by Stephan Ewen

[docs] Update readme with current feature list and streaming example

Parent: 405d2223
Showing 1 changed file, README.md, with 52 additions and 15 deletions (+52, -15).
````diff
 # Apache Flink

-Apache Flink is an open source platform for scalable batch and stream data processing. Flink supports batch and streaming analytics in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala.
+Apache Flink is an open source stream processing framework with powerful stream- and batch-processing capabilities.
+
+Learn more about Flink at [http://flink.apache.org/](http://flink.apache.org/)
+
+### Features
+
+* A streaming-first runtime that supports both batch processing and data streaming programs
+* Elegant and fluent APIs in Java and Scala
+* A runtime that supports very high throughput and low event latency at the same time
+* Support for *event time* and *out-of-order* processing in the DataStream API, based on the *Dataflow Model*
+* Flexible windowing (time, count, sessions, custom triggers) across different time semantics (event time, processing time)
+* Fault-tolerance with *exactly-once* processing guarantees
+* Natural back-pressure in streaming programs
+* Libraries for Graph processing (batch), Machine Learning (batch), and Complex Event Processing (streaming)
+* Built-in support for iterative programs (BSP) in the DataSet (batch) API
+* Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
+* Compatibility layers for Apache Hadoop MapReduce and Apache Storm
+* Integration with YARN, HDFS, HBase, and other components of the Apache Hadoop ecosystem
````
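The *event time* and flexible-windowing bullets are easiest to see in code. Below is a minimal sketch against the Scala DataStream API of this era; the `Event` type, the toy in-memory source, and the 10-second window are illustrative assumptions, not part of the commit:

```scala
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

// Hypothetical event type; `timestamp` carries event time in milliseconds.
case class Event(id: String, timestamp: Long)

object EventTimeWindowSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Window by event time instead of the default processing time.
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

    // Toy in-memory source standing in for a real stream.
    val events = env.fromElements(
      Event("a", 1000L), Event("b", 2000L), Event("a", 3000L))

    val counts = events
      // Derive watermarks from the (here monotonically increasing) timestamps.
      .assignAscendingTimestamps(_.timestamp)
      .map(e => (e.id, 1))
      .keyBy(0)
      // Tumbling 10-second event-time window; .countWindow(n) would give a
      // count window over the same keyed stream.
      .timeWindow(Time.seconds(10))
      .sum(1)

    counts.print()
    env.execute("Event-time window sketch")
  }
}
```

With event time, the window an element falls into is decided by its `timestamp` field rather than the wall clock of the processing machine, which is what makes out-of-order processing possible.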
````diff
+### Streaming Example
+
+```scala
+case class WordWithCount(word: String, count: Long)
+
+val text = env.socketTextStream(host, port, '\n')
+
+val windowCounts = text.flatMap { w => w.split("\\s") }
+  .map { w => WordWithCount(w, 1) }
+  .keyBy("word")
+  .timeWindow(Time.seconds(5))
+  .sum("count")
+
+windowCounts.print()
+```
````
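As committed, the snippet assumes an ambient `env`, `host`, and `port`. A self-contained version of the same program could look like this; the object wrapper, the localhost:9999 values, and the final `execute()` call are additions for illustration:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object SocketWindowWordCount {
  case class WordWithCount(word: String, count: Long)

  def main(args: Array[String]): Unit = {
    // Assumed endpoint; run `nc -lk 9999` first to feed the stream.
    val host = "localhost"
    val port = 9999

    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val text = env.socketTextStream(host, port, '\n')

    val windowCounts = text
      .flatMap { w => w.split("\\s") }
      .map { w => WordWithCount(w, 1) }
      .keyBy("word")
      .timeWindow(Time.seconds(5))
      .sum("count")

    windowCounts.print()

    // Nothing runs until execute() is called.
    env.execute("Socket Window WordCount")
  }
}
```

Every five seconds the job prints the per-word counts accumulated in the last window.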
````diff
+### Batch Example
+
 ```scala
-case class WordWithCount(word: String, count: Int)
+case class WordWithCount(word: String, count: Long)

 val text = env.readTextFile(path)
````

...

````diff
@@ -16,16 +61,6 @@ val counts = text.flatMap { _.split("\\W+") }
 counts.writeAsCsv(outputPath)
 ```
````
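The middle of the batch example is collapsed in this view; only the hunk header shows `val counts = text.flatMap { _.split("\\W+") }`. A complete batch WordCount in the same style is sketched below; the grouping steps and the concrete paths are an illustrative reconstruction, not the hidden diff lines:

```scala
import org.apache.flink.api.scala._

object BatchWordCount {
  case class WordWithCount(word: String, count: Long)

  def main(args: Array[String]): Unit = {
    // Assumed input and output locations.
    val path = "file:///tmp/input.txt"
    val outputPath = "file:///tmp/counts.csv"

    val env = ExecutionEnvironment.getExecutionEnvironment

    val text = env.readTextFile(path)

    val counts = text
      .flatMap { _.split("\\W+") }   // tokenize, as in the hunk header
      .filter { _.nonEmpty }
      .map { w => WordWithCount(w, 1) }
      .groupBy("word")
      .sum("count")

    counts.writeAsCsv(outputPath)

    // DataSet sinks are lazy as well; execute() triggers the job.
    env.execute("Batch WordCount")
  }
}
```

Unlike the streaming job, this program processes a finite data set and terminates once the CSV is written.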
````diff
-These are some of the unique features of Flink:
-
-* Hybrid batch/streaming runtime that supports batch processing and data streaming programs.
-* Custom memory management to guarantee efficient, adaptive, and highly robust switching between in-memory and out-of-core data processing algorithms.
-* Flexible and expressive windowing semantics for data stream programs.
-* Built-in program optimizer that chooses the proper runtime operations for each program.
-* Custom type analysis and serialization stack for high performance.
-
-Learn more about Flink at [http://flink.apache.org/](http://flink.apache.org/)

 ## Building Apache Flink from Source
````

...
````diff
@@ -34,21 +69,23 @@ Prerequisites for building Flink:

 * Unix-like environment (We use Linux, Mac OS X, Cygwin)
 * git
-* Maven (at least version 3.0.4)
+* Maven (we recommend version 3.0.4)
 * Java 7 or 8

 ```
 git clone https://github.com/apache/flink.git
 cd flink
-mvn clean package -DskipTests # this will take up to 5 minutes
+mvn clean package -DskipTests # this will take up to 10 minutes
 ```

 Flink is now installed in `build-target`

+*NOTE: Maven 3.3.x can build Flink, but will not properly shade away certain dependencies. Maven 3.0.3 creates the libraries properly.*

 ## Developing Flink

 The Flink committers use IntelliJ IDEA and Eclipse IDE to develop the Flink codebase.
 We recommend IntelliJ IDEA for developing projects that involve Scala code.

 Minimal requirements for an IDE are:

 * Support for Java and Scala (also mixed projects)
````

...