- 06 Jan, 2020: 1 commit, by Sijie Guo
- 10 Dec, 2019: 1 commit, by Sijie Guo

*Motivation* Bump the development version to 2.6.0-SNAPSHOT

- 25 Jun, 2019: 1 commit, by lipenghui
- 15 May, 2019: 1 commit, by Boyang Jerry Peng

* pulsar-io connectors use secrets
* remove unnecessary files
* fix pom
* add license headers

- 11 Apr, 2019: 1 commit, by Fangbin Sun

* Add a Pulsar IO connector for InfluxDB sink.
* Add sensitive

- 02 Apr, 2019: 1 commit, by Fangbin Sun

* Add a Pulsar IO connector for Solr sink.
* Remove empty lines and add a test scope in pom
* Add license header in pom

- 12 Mar, 2019: 1 commit, by Fangbin Sun

### Motivation

This PR provides a built-in Redis sink connector, in order to cache messages in Redis as key-value pairs. This effectively makes Redis a caching system, which other applications can access to get the latest value.

### Modifications

Add a new sub-module in the `pulsar-io` module.

### Verifying this change

This change can be verified as follows:

* deploy the Redis sink connector with a configuration file containing the following fields:

```
configs:
  redisHosts: "localhost:6379"
  redisPassword: "redis@123"
  redisDatabase: "1"
  clientMode: "Standalone"
  operationTimeout: "3000"
  batchSize: "100"
```

* start a Redis instance with auth enabled
* send messages with non-null key/value to the topic declared when deploying the connector
* check in Redis whether the messages' key-value pairs have been stored in the above database

### Documentation

```
# Submit a Redis sink
$ bin/pulsar-admin sink create --tenant public --namespace default --name redis-test-sink --sink-type redis --sink-config-file examples/redis-sink.yaml --inputs test_redis

# List sinks
$ bin/pulsar-admin sink list --tenant public --namespace default

# Get sink info
$ bin/pulsar-admin sink get --tenant public --namespace default --name redis-test-sink

# Get sink status
$ bin/pulsar-admin sink status --tenant public --namespace default --name redis-test-sink

# Delete the Redis sink
$ bin/pulsar-admin sink delete --tenant public --namespace default --name redis-test-sink
```

- 04 Mar, 2019: 1 commit, by tuteng

* Support TLS authentication and authorization in standalone mode
* Compile success; to do: test channel, sink and source of Flume
* Add conf file
* Add sink and source folders; move files into folders; add source; compile success; to do: test source
* Test Flume source passes
* Add config file and test case
* Add tests and update pom.xml; to do: add tests for source and sink
* Add unit tests
* Add test case, tests pass; to do: test source
* Add license; add test source for Pulsar
* Handle blockingQueue being null
* Rename LOG to log
* Format code
* Add sinkClass in pulsar-io.yaml
* Add comment
* Default is false
* Modify Flume pom file
* Format pom.xml file
* Move pom version to 2.4.0-SNAPSHOT

- 16 Feb, 2019: 1 commit, by Matteo Merli
- 13 Feb, 2019: 1 commit, by Bruno Bonnin

### Motivation

Provides a built-in MongoDB connector, in order to ease the storage of JSON-formatted messages in MongoDB. It is a sink connector.

### Modifications

Add a new sub-module in the `pulsar-io` module.

### Verifying this change

This change added tests and can be verified as follows:

* deploy the connector with a configuration file containing the following fields:

```
configs:
  mongoUri: mongodb://hostname:port
  database: pulsar
  collection: messages
```

* start a MongoDB instance
* send messages to the topic declared when deploying the connector
* check in MongoDB whether the messages have been stored in the `messages` collection

- 01 Feb, 2019: 1 commit, by wpl

Support writing data to an HBase sink (#3290)

- 01 Jan, 2019: 1 commit, by Boyang Jerry Peng

* add sink and source Prometheus stats
* fixing stuff

- 28 Dec, 2018: 1 commit, by David Kjerrumgaard

### Motivation

Added a Pulsar IO connector for consuming files from the local filesystem.

### Modifications

Added a new module to the `pulsar-io` module that includes the Pulsar file connector and its associated classes and tests.

### Result

After this change, users will be able to consume files from the local filesystem and have the contents published directly to a Pulsar topic.

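For context, a minimal sketch of how such a file source might be configured. The keys below are assumptions for illustration only (they do not appear in this commit message); check the connector's config class for the actual field names:

```yaml
# Hypothetical example config for the file source connector (key names assumed)
configs:
  inputDirectory: "/var/pulsar/input"   # directory to watch for files (assumed key)
  recurse: "true"                       # also scan subdirectories (assumed key)
  pollingInterval: "10000"              # scan interval in milliseconds (assumed key)
```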
- 16 Dec, 2018: 2 commits

By David Kjerrumgaard

### Motivation

A user was attempting to use the existing HDFS connector to connect to a 2.x version of HDFS, but the current connector only supported the 3.x version of HDFS.

### Modifications

To address this issue, we renamed the current HDFS connector to HDFS3 and created a new 2.x-compatible connector named HDFS2. The code in both is nearly identical, with two notable exceptions. First and foremost, they use different versions of the Hadoop client library. Secondly, the HDFS2 version creates the FSDataOutputStream object directly, whereas the HDFS3 version leverages the FSDataOutputStreamBuilder class for this purpose, as it is the preferred method going forward.

### Result

There will be support for connecting to both 2.x and 3.x versions of HDFS. However, there MAY BE some library conflicts in the released jar due to the different versions of the same library in the different modules. Hopefully the NAR packaging will address this.

By Eren Avsarogullari

### Motivation

Netty is an NIO client-server framework supporting asynchronous event-driven communication and custom protocol implementations (ref: https://netty.io/). This PR proposes a Pulsar IO Netty source connector aimed at TCP clients. It starts an embedded TCP server that listens for incoming TCP messages and writes them to a user-defined Pulsar topic. There are also other potential use cases (TCP, HTTP and UDP messages) for this module, as follows:
- TCP client (Pulsar IO sink): listens for Pulsar messages and writes them to a remote TCP server.
- HTTP server and client (Pulsar IO source and sink)
- UDP server and client (Pulsar IO source and sink)

This is a follow-up to PR #3095. The module has been moved into pulsar-io in light of the previous discussion.

### Modifications

1. `NettyTcpServer`: initializes an embedded TCP server to listen for incoming TCP requests
2. `NettyTcpServerHandler`: inbound channel handler for incoming TCP requests
3. `NettyChannelInitializer`: channel initializer to support different types of decoders and handlers
4. `NettyTcpSource`: a push-based source that listens for TCP messages and writes them to a user-defined Pulsar topic
5. `NettyTcpSourceConfig`: supports user-defined config via both Map and YAML
6. Unit test coverage

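As a rough illustration, a YAML config consumed by `NettyTcpSourceConfig` might look like the following. The key names are assumptions for illustration, not taken from this commit; consult the config class for the actual fields:

```yaml
# Hypothetical example config for the Netty TCP source (key names assumed)
configs:
  type: "tcp"          # protocol handled by the embedded server (assumed key)
  host: "127.0.0.1"    # address the embedded TCP server binds to (assumed key)
  port: "10999"        # port the embedded TCP server listens on (assumed key)
```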
- 26 Nov, 2018: 2 commits

By Sijie Guo

*Motivation* Currently all IO connectors lack example YAML files. Manually writing those files is error-prone. We need a programmatic way to automatically generate example connector YAML files.

*Changes*
- introduce annotations for documenting connector YAML fields
- provide a generator to generate the YAML files
- provide a shell script to run the generator
- generate the example YAML configs when building the IO package

By tuteng

### Motivation

Support Alibaba Canal: https://github.com/alibaba/canal/wiki

### Modifications

Integrated the Canal client.

### Result

Supports syncing MySQL binlog to Pulsar. Use the Python pulsar-client to consume:

```
import pulsar

client = pulsar.Client('pulsar://localhost:6650')
consumer = client.subscribe('my-topic', subscription_name='my-sub')

while True:
    msg = consumer.receive()
    print("Received message: '%s'" % msg.data())
    consumer.acknowledge(msg)

client.close()
```

Output:

```
Received message: '[{"data":null,"database":"testdb","es":1542446501000,"id":44,"isDdl":true,"mysqlType":null,"old":null,"sql":"CREATE TABLE `users320` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(50) DEFAULT NULL, `extra` varchar(50) DEFAULT NULL, PRIMARY KEY (`id`), KEY `ix_users_name` (`name`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8","sqlType":null,"table":"users320","ts":1542446501114,"type":"CREATE"}]'
```

- 11 Oct, 2018: 1 commit, by joefk
- 04 Oct, 2018: 1 commit, by Jia Zhai

### Motivation

Add a Kafka source connect adaptor for Debezium. This will save data from Kafka source connect into Pulsar.

### Modifications

Add classes and tests.

### Result

Unit tests pass.

- 24 Sep, 2018: 1 commit, by Jia Zhai

### Motivation

Add PulsarDatabaseHistory for Debezium.

### Modifications

Add PulsarDatabaseHistory for Debezium, plus tests for it.

### Result

Unit tests pass.

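In Debezium, the database-history implementation is selected via the connector's `database.history` property. A configuration pointing at a Pulsar-backed history might look roughly like the following; the class name and property keys are assumptions for illustration, not taken from this commit:

```
# Hypothetical Debezium connector properties selecting a Pulsar-backed database history
database.history=org.apache.pulsar.io.debezium.PulsarDatabaseHistory   # assumed class name
database.history.pulsar.topic=persistent://public/default/db-history   # assumed property
database.history.pulsar.service.url=pulsar://localhost:6650            # assumed property
```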
- 21 Sep, 2018: 1 commit, by Matteo Merli
- 11 Sep, 2018: 1 commit, by David Kjerrumgaard

### Motivation

Added a sink connector that writes JSON documents into Elasticsearch.

### Modifications

Added a new pulsar-io module and associated integration tests.

### Result

An Elasticsearch sink connector will be available for use.

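For context, a minimal sketch of how such an Elasticsearch sink might be configured. The key names are assumptions for illustration (they do not appear in this commit message); check the connector's config class for the actual fields:

```yaml
# Hypothetical example config for the Elasticsearch sink (key names assumed)
configs:
  elasticSearchUrl: "http://localhost:9200"   # cluster endpoint (assumed key)
  indexName: "pulsar-messages"                # target index for JSON documents (assumed key)
```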
- 07 Sep, 2018: 1 commit, by David Kjerrumgaard

* Added HDFS sink
* Fixed issues identified during PR review
* Fixed comment
* Added HDFS container to externalServices
* Ignoring HdfsSink test for now
* Removed HDFS container from externalServices
* Fixed ASL licensing
* Fixed compile errors
* Added HDFS to the SinkType enum

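As a rough illustration, an HDFS sink configuration might look like the following. The key names and paths are assumptions for illustration, not taken from this commit:

```yaml
# Hypothetical example config for the HDFS sink (key names assumed)
configs:
  hdfsConfigResources: "/etc/hadoop/conf/core-site.xml"  # Hadoop config files (assumed key)
  directory: "/pulsar/export"                            # target HDFS directory (assumed key)
  filenamePrefix: "topic-data"                           # prefix for generated files (assumed key)
```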
- 06 Sep, 2018: 1 commit, by Ali Ahmed
- 05 Sep, 2018: 1 commit, by Jia Zhai

### Motivation

This change adds a basic JDBC sink connector.

### Modifications

Add the jdbc module to the pulsar-io sub-module. Add unit tests and an integration test for it.

### Result

Unit tests and the integration test pass.

Master issue: #2442

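For context, a minimal sketch of how such a JDBC sink might be configured. The key names are assumptions for illustration (they do not appear in this commit message); check the connector's config class for the actual fields:

```yaml
# Hypothetical example config for the JDBC sink (key names assumed)
configs:
  jdbcUrl: "jdbc:mysql://localhost:3306/pulsar"  # target database URL (assumed key)
  userName: "root"                               # database user (assumed key)
  password: "jdbc"                               # database password (assumed key)
  tableName: "pulsar_mysql_jdbc_sink"            # target table (assumed key)
```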
- 01 Sep, 2018: 1 commit, by Boyang Jerry Peng

* Initial SQL documentation and DataGeneratorSource
* adding license header
* adding to sidebar
* improving documentation
* adding to SQL getting started
* small fix
* adding data generator to connector bin distribution
* improve SQL worker CLI
* adding data generator to IO pom
* modifying launch args

- 28 Aug, 2018: 1 commit, by Ali Ahmed
- 26 Jun, 2018: 1 commit, by Sijie Guo

Signed-off-by: Sijie Guo <sijie@apache.org>

- 08 Jun, 2018: 1 commit, by Rajan Dhabalia

* Introduce Kinesis sink on function; add pulsarSinkE2E test
* Remove Kinesis test and dep

- 09 May, 2018: 1 commit, by Luc Perkins

* begin renaming process
* more class and directory renames
* move Record classes into pulsar-io
* apply rename to Maven configs
* rename java imports
* update versions in maven configs
* add missing imports
* remove Message class from pulsar-io
* add missing import
* add Reflections util import
* add Utils import
* add missing Record import
* supply missing Record imports

- 03 May, 2018: 1 commit, by Matteo Merli
- 13 Apr, 2018: 2 commits

By Sanjeev Kulkarni

* Added Kafka Source and Kafka Sink to Pulsar Connect
* Standardize on Kafka versions for compat and connect

By Sanjeev Kulkarni

- 11 Apr, 2018: 2 commits

By Sanjeev Kulkarni

By Sanjeev Kulkarni

* Added Cassandra Sink Connector

- 10 Apr, 2018: 1 commit, by Sanjeev Kulkarni

* Added Pulsar Connect interfaces that define connectors that push data into Pulsar and take data from Pulsar
* Added Twitter connector
* Added hbc-core version mapping
* Addressed comments
* Fixed build
* Fixed license header

- 14 Feb, 2018: 1 commit, by Matteo Merli
- 29 Nov, 2017: 1 commit, by nkurihar
- 06 Oct, 2017: 1 commit, by Rajan Dhabalia
- 17 Sep, 2017: 1 commit, by Matteo Merli