Hyperledger Fabric SDK for Java
- Documentation: https://hyperledger.github.io/fabric-gateway-java/
- GitHub repository: https://github.com/hyperledger/fabric-gateway-java/
For building Hyperledger Fabric blockchain client applications, you are strongly encouraged to use the high level API.
The information below is intended for contributors to this repository.
Introduction for contributors
The SDK provides a layer of abstraction on top of the wire-level protobuf based communication protocol used by client applications to interact with a Hyperledger Fabric blockchain network. It allows Java applications to manage the lifecycle of Hyperledger channels and user chaincode. The SDK also provides a means to execute user chaincode, query blocks and transactions on the channel, and monitor events on the channel.
The SDK acts on behalf of a particular User, which is defined by the embedding application through the implementation of the SDK's `User` interface.
Note, the SDK does not provide a means of persistence for the application-defined channels and user artifacts on the client. This is left to the embedding application to manage as it sees fit. Channels may be serialized via Java serialization in the context of a client; deserialized channels are not in an initialized state. Applications need to handle migration of serialized files between versions.
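Since persistence is left to the application, here is a minimal sketch of the round trip using plain Java serialization. The SDK's channel serialization entry points are assumed here to be `Channel.serializeChannel()` and `HFClient.deSerializeChannel(byte[])`; check the Javadoc for the exact signatures.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ChannelPersistenceSketch {

    // Serialize any Serializable object to bytes. With the real SDK, the
    // bytes would come from the channel's own serialization method instead.
    public static byte[] toBytes(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    public static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // The embedding application must store these bytes somewhere durable
        // (file, database) and read them back between runs.
        byte[] saved = toBytes("mychannel");
        System.out.println(fromBytes(saved)); // prints: mychannel
    }
}
```

Remember that a channel deserialized this way is not initialized; the application must re-initialize it before use.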
The SDK also provides a client for Hyperledger Fabric's certificate authority. The SDK is, however, not dependent on this particular implementation of a certificate authority; other certificate authorities may be used by implementing the SDK's `Enrollment` interface.
This provides a summary of the steps required to get started building and using the Java SDK. Please note that this is neither the API documentation nor a tutorial for the SDK; it is only intended to help you get started if you are new to this domain.
|SDK version|Release notes|Notes|
|---|---|---|
|2.1|v2.1 release notes|Minor update|
|2.0|v2.0 release notes||
|1.4|None|Minor updates, no Fabric changes|
|1.3|v1.3 release notes||
|1.2|v1.2 release notes||
|1.1|v1.1 release notes||
Checkout the SDK from GitHub

```
git clone https://github.com/hyperledger/fabric-sdk-java.git
cd fabric-sdk-java/
```
Production Java applications
For Java applications, use the latest released version of the SDK from the v1.4.x releases:

```xml
<!-- https://mvnrepository.com/artifact/org.hyperledger.fabric-sdk-java/fabric-sdk-java -->
<dependency>
    <groupId>org.hyperledger.fabric-sdk-java</groupId>
    <artifactId>fabric-sdk-java</artifactId>
    <version>1.4.7</version>
</dependency>
```
For v2.0 work in progress use 2.0.0-SNAPSHOT builds
Work in progress 2.0.0 SNAPSHOT builds can be used by adding the following to your application's pom.xml
```xml
<repositories>
    <repository>
        <id>snapshots-repo</id>
        <url>https://oss.sonatype.org/content/repositories/snapshots</url>
        <releases>
            <enabled>false</enabled>
        </releases>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

<dependencies>
    <!-- https://mvnrepository.com/artifact/org.hyperledger.fabric-sdk-java/fabric-sdk-java -->
    <dependency>
        <groupId>org.hyperledger.fabric-sdk-java</groupId>
        <artifactId>fabric-sdk-java</artifactId>
        <version>2.0.0-SNAPSHOT</version>
    </dependency>
</dependencies>
```
Java and Node Chaincode environment
For now, on your v2.1 Fabric network Docker deployment, you may also need to explicitly pull the Java and Node chaincode environments:
```
docker pull hyperledger-fabric.jfrog.io/fabric-nodeenv:amd64-2.1.0-stable
docker tag hyperledger-fabric.jfrog.io/fabric-nodeenv:amd64-2.1.0-stable hyperledger/fabric-nodeenv:amd64-latest
docker tag hyperledger-fabric.jfrog.io/fabric-nodeenv:amd64-2.1.0-stable hyperledger/fabric-nodeenv

docker pull hyperledger-fabric.jfrog.io/fabric-javaenv:amd64-2.1.0-stable
docker tag hyperledger-fabric.jfrog.io/fabric-javaenv:amd64-2.1.0-stable hyperledger/fabric-javaenv:amd64-latest
docker tag hyperledger-fabric.jfrog.io/fabric-javaenv:amd64-2.1.0-stable hyperledger/fabric-javaenv
```
Known limitations and restrictions
- TCerts are not supported: JIRA FAB-1401
The SDK depends on a few third-party libraries that must be included in your classpath when using the JAR file. To get a list of dependencies, refer to the pom.xml file or run:

```
mvn dependency:tree
```

Alternatively, `mvn dependency:analyze-report` will produce an HTML report in the target directory listing all the dependencies in a more readable format.
To build this project, the following dependencies must be met:
- JDK 1.8 or above
- Apache Maven 3.5.0
To run the integration tests, Fabric and Fabric CA are needed, which require:
- Docker 18.03
- Docker compose 1.21.2
Using the SDK
Setting Up Eclipse
If you want to get started using the Fabric Java SDK with Eclipse, refer to the instructions at: ./docs/EclipseSetup.md
Once your JAVA_HOME points to your installation of JDK 1.8 (or above), and JAVA_HOME/bin and Apache Maven are in your PATH, issue the following command to build the jar file without running the unit tests:

```
mvn install -DskipTests
```
Running the unit tests
To run the unit tests, use:

```
mvn install
```

which will run the unit tests and build the jar file.
Many unit tests exercise failure conditions, resulting in exceptions and stack traces being displayed. This is not an indication of failure!

`[INFO] BUILD SUCCESS` at the end is usually a very reliable indication that all tests have passed successfully.
Running the integration tests
The script below both sets up the test environment and runs the tests.
End to end test scenario
The integration tests and example code below show almost all that the SDK can do. To learn the SDK, you must first have some understanding of Hyperledger Fabric. Then it's best to study the integration tests, and better yet, work through them in a debugger to follow the code (a live demo). Start with the End2endIT.java sample, then End2endAndBackAgainIT.java, before exploring the other samples. Once you understand them, you can cut and paste from there into your own application (the code is done for you!).
Note: these samples are for testing, validating your environment, and showing how to use the APIs. Most show a simple balance transfer. They are not meant to represent best practices in the design or use of chaincode, or in the use of the SDK.
|Integration Test|Summary and notes|
|---|---|
End to end test environment
The test defines one Fabric orderer and two organizations (peerOrg1, peerOrg2), each of which has two peers and one fabric-ca service.
Certificates and other cryptography artifacts
Fabric requires that each organization has private keys and certificates for use in signing and verifying messages going to and from clients, peers and orderers. Each organization groups these artifacts in an MSP (Membership Service Provider) with a corresponding unique MSPID .
Furthermore, each organization is assumed to generate these artifacts independently. The fabric-ca project is an example of such a certificate generation service.
Fabric also provides the cryptogen tool to automatically generate all cryptographic artifacts needed for the end to end test.
In the directory src/test/fixture/sdkintegration/e2e-2Orgs/channel, the commands used to generate the end2end cryptographic artifacts were:

```
build/bin/cryptogen generate --config crypto-config.yaml --output=crypto-config
cryptogen generate --config crypto-config.yaml --output=v1.1/crypto-config
```
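For orientation, a crypto-config.yaml describing the two organizations used by the tests might look roughly like the following sketch. The domains and user counts here are illustrative and not necessarily the values in the test fixture; consult the actual file under src/test/fixture/sdkintegration.

```yaml
OrdererOrgs:
  - Name: Orderer
    Domain: example.com

PeerOrgs:
  - Name: peerOrg1
    Domain: org1.example.com
    Template:
      Count: 2      # two peers per organization, as in the end to end test
    Users:
      Count: 1      # non-admin user identities to generate
  - Name: peerOrg2
    Domain: org2.example.com
    Template:
      Count: 2
    Users:
      Count: 1
```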
For ease of assigning ports and mapping of artifacts to physical files, all peers, orderers, and fabric-ca are run as Docker containers controlled via a docker-compose configuration file.
The files used by the end to end are:
- src/test/fixture/sdkintegration/e2e-2Orgs/vX.0 (everything needed to bootstrap the orderer and create the channels)
- src/test/fixture/sdkintegration/e2e-2Orgs/vX.0/crypto-config (as-is; used by docker-compose to map the MSP directories)
The end to end test case artifacts are stored under the directory src/test/fixture/sdkintegration/e2e-2Orgs/channel.
TLS connection to Orderer and Peers
IBM Java needs additional properties defined to use TLS 1.2 for HTTPS connections to Fabric CA.
Currently, the pom.xml is set to use netty-tcnative-boringssl for TLS connection to Orderer and Peers, however, you can change the pom.xml (uncomment a few lines) to use an alternative TLS connection via ALPN.
TLS Environment for SDK Integration Tests
TLS for the SDK integration tests can be enabled by setting the following environment variables before running `./fabric.sh restart`:

```
ORG_HYPERLEDGER_FABRIC_SDKTEST_INTEGRATIONTESTS_TLS=true ORG_HYPERLEDGER_FABRIC_SDKTEST_INTEGRATIONTESTS_CA_TLS=--tls.enabled ./fabric.sh restart
```
Then run the Integration tests with:
```
ORG_HYPERLEDGER_FABRIC_SDKTEST_INTEGRATIONTESTS_TLS=true mvn clean install -DskipITs=false -Dmaven.test.failure.ignore=false javadoc:javadoc
```
Chaincode endorsement policies
You create a policy using a Fabric tool ( an example is shown in JIRA issue FAB-2376) and give it to the SDK either as a file or a byte array. The SDK, in turn, will use the policy when it creates chaincode instantiation requests.
To input a policy to the SDK, use the ChaincodeEndorsementPolicy class.
For testing purposes, there are two policy files in the src/test/resources directory:

- policyBitsAdmin (policy AND(DEFAULT.admin), meaning one signature from the DEFAULT MSP admin is required)
- policyBitsMember (policy AND(DEFAULT.member), meaning one signature from a member of the DEFAULT MSP is required)

and one file in the src/test/fixture/sdkintegration/e2e-2Orgs/channel directory, specifically for use in the end to end test scenario:

- members_from_org1_or_2.policy (policy OR(peerOrg1.member, peerOrg2.member), meaning one signature from a member of either organization peerOrg1 or peerOrg2 is required)
Alternatively, you can use the ChaincodeEndorsementPolicy class by giving it a YAML file that has the policy defined in it.
See examples of this in the End2endIT testcases that use src/test/fixture/sdkintegration/chaincodeendorsementpolicy.yaml
The file chaincodeendorsementpolicy.yaml has comments that help in understanding how to create these policies. The first section lists all the signature identities you can use in the policy. Currently, only ROLE types are supported. The policy section is made up of `n-of` and `signed-by` elements. An `n-of` (for example `2-of`) requires that many (`n`) of the elements nested under it be true. A `signed-by` references an identity in the identities section.
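As a rough sketch of this YAML layout (the identity names and MSP IDs here are illustrative; see the actual chaincodeendorsementpolicy.yaml used by the tests for the authoritative version):

```yaml
# Identities that the policy section below may reference.
identities:
    user1: {"role": {"name": "member", "mspId": "peerOrg1"}}
    user2: {"role": {"name": "member", "mspId": "peerOrg2"}}

# One signature from either identity satisfies the policy.
policy:
    1-of:
      - signed-by: "user1"
      - signed-by: "user2"
```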
Channel creation artifacts
Channel configuration files and orderer bootstrap files ( see directory src/test/fixture/sdkintegration/e2e-2Orgs ) are needed when creating a new channel.
These are created with the Hyperledger Fabric configtxgen tool. This must be run after cryptogen, and the directory you run it in must contain the generated crypto-config directory. If the build/bin/configtxgen tool is not present, it must be built or obtained first.
For the v1.0 integration tests, the commands are:
- build/bin/configtxgen -outputCreateChannelTx foo.tx -profile TwoOrgsChannel -channelID foo
- build/bin/configtxgen -outputCreateChannelTx bar.tx -profile TwoOrgsChannel -channelID bar
For v1.1 integration, the commands use the v11 profiles in configtx.yaml. For now, you need to copy the configtx.yaml in e2e-2Orgs to the v1.1 directory and run from there:
- configtxgen -outputBlock orderer.block -profile TwoOrgsOrdererGenesis_v11
- configtxgen -outputCreateChannelTx bar.tx -profile TwoOrgsChannel_v11 -channelID bar
- configtxgen -outputCreateChannelTx foo.tx -profile TwoOrgsChannel_v11 -channelID foo
For v1.2 integration the commands use the v12 profiles in configtx.yaml.
- configtxgen --configPath . -outputBlock orderer.block -profile TwoOrgsOrdererGenesis_v12
- configtxgen --configPath . -outputCreateChannelTx bar.tx -profile TwoOrgsChannel_v12 -channelID bar
- configtxgen --configPath . -outputCreateChannelTx foo.tx -profile TwoOrgsChannel_v12 -channelID foo
This should produce the following files in the v1.2 directory: bar.tx, foo.tx, and orderer.block.
For v1.3 and v1.4 integration, cd to the corresponding version directory and execute the following commands:
- configtxgen --configPath . -outputBlock orderer.block -profile TwoOrgsOrdererGenesis_v13
- configtxgen --configPath . -outputCreateChannelTx foo.tx -profile TwoOrgsChannel_v13 -channelID foo
- configtxgen --configPath . -outputCreateChannelTx bar.tx -profile TwoOrgsChannel_v13 -channelID bar
For v2.1 integration, cd to the corresponding version directory and execute the following commands:
- configtxgen --configPath . -outputCreateChannelTx v2channel.tx -profile TwoOrgsChannel_v20 -channelID v2channel
- configtxgen --configPath . -outputBlock orderer.block -profile TwoOrgsOrdererGenesis_v20 -channelID systemordererchannel
This should produce the corresponding orderer.block and channel transaction (.tx) files in the same directory.
Note: the above describes how this was done. If you redo it, the private key files are produced with unique names which won't match what's expected in the integration tests. One example of this is in docker-compose.yaml (search for _sk).
GO Lang chaincode
Go chaincode dependencies must be contained in the vendor folder. For an explanation of this, see the Vendor folder explanation.
Basic Troubleshooting and frequently asked questions:
Where can I find the Javadoc?
Look in the Maven repository for the release in question; there should be a file fabric-sdk-java-&lt;release&gt;-javadoc.jar.

For SNAPSHOT builds, look in the Sonatype repository: find the &lt;release&gt;-SNAPSHOT directory, then search for the latest fabric-sdk-java-&lt;release&gt;-&lt;latest timestamp&gt;-javadoc.jar.
Is Android supported?
Is there an API to query for all channels that exist?
Should an application create more than one HFClient?
There should be no need to do that in a single application. All the SDK requests are thread-safe. The user context set on the client can be overridden on any specific request by setting the user context on that request.
Idemix users or Idemix test cases (IdemixIdentitiesTest) just seem to hang or take forever.
Most likely this is running on a virtual machine that does not have sufficient entropy. Search for guidance on adding entropy to virtual machines. On Linux, try installing the rng-tools package.
Firewalls, load balancers, network proxies
These can sometimes silently kill a network connection and prevent it from automatically reconnecting. To fix this, look at adding the grpc.NettyChannelBuilderOption.keepAliveWithoutCalls option to the Peers' and Orderers' connection properties. Examples of this are in End2endIT.java.
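A minimal sketch of building such connection properties follows. It assumes the `grpc.NettyChannelBuilderOption.*` property-key convention named above; the specific timeout values are illustrative, not authoritative, so compare against End2endIT.java.

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class KeepAliveProps {

    // Connection properties enabling gRPC keep-alive pings so that idle
    // connections through firewalls/load balancers are not silently dropped.
    // The values (5 minutes / 8 seconds) are illustrative.
    public static Properties keepAliveProps() {
        Properties props = new Properties();
        props.put("grpc.NettyChannelBuilderOption.keepAliveTime",
                new Object[] {5L, TimeUnit.MINUTES});
        props.put("grpc.NettyChannelBuilderOption.keepAliveTimeout",
                new Object[] {8L, TimeUnit.SECONDS});
        props.put("grpc.NettyChannelBuilderOption.keepAliveWithoutCalls",
                new Object[] {true});
        return props;
    }

    public static void main(String[] args) {
        // These properties would then be passed when constructing the Peer
        // or Orderer, e.g. client.newPeer(name, grpcURL, keepAliveProps()).
        System.out.println(keepAliveProps().containsKey(
                "grpc.NettyChannelBuilderOption.keepAliveWithoutCalls")); // prints: true
    }
}
```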
Missing protobuf classes.
Please re-read this file and follow exactly the steps to run all the tests. The classes can't be missing if the tests pass.
grpc message frame size exceeds maximum
The message being returned from the Fabric server is too large for the default gRPC frame size. On the Peer or Orderer, add the appropriate frame-size property to the connection properties. See End2endIT's constructChannel.
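The property name is not spelled out above; gRPC's NettyChannelBuilder calls this setting maxInboundMessageSize, so under the SDK's property-key convention it would presumably be the key shown below. This is an assumption; verify the exact key and a suitable size against End2endIT's constructChannel.

```java
import java.util.Properties;

public class FrameSizeProps {

    // Assumed property key mirroring NettyChannelBuilder.maxInboundMessageSize.
    static final String KEY = "grpc.NettyChannelBuilderOption.maxInboundMessageSize";

    public static Properties frameSizeProps() {
        Properties props = new Properties();
        props.put(KEY, 9000000); // raise the inbound limit to ~9 MB (illustrative value)
        return props;
    }

    public static void main(String[] args) {
        System.out.println(frameSizeProps().get(KEY)); // prints: 9000000
    }
}
```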
Configuration and setting default values - timeouts etc
What's difference between joining and adding a peer to a channel?
You only ever join a peer belonging to your own organization to a channel, and only once, at the beginning. You would only add peers from other organizations, or peers of your own organization that you've already joined, for example when recreating the channel SDK object.
Transaction sent to orderer results in future with exception validation code: xxx Where can I find what that means?
See Fabric protobuf protos/peer/transaction.proto's TxValidationCode
java.security.InvalidKeyException: Illegal key size
If you get this error, it means your JDK is not capable of handling unlimited-strength crypto algorithms. To fix this issue, you will need to download the JCE libraries for your version of the JDK. Please follow the instructions here to download and install the JCE for your version of the JDK.
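A quick diagnostic sketch to check whether your JDK is affected:

```java
import javax.crypto.Cipher;

public class CheckJce {
    public static void main(String[] args) throws Exception {
        // Maximum AES key length (bits) allowed by the installed crypto policy.
        // 128 means the unlimited-strength policy files are missing; recent
        // JDKs (8u161 and later) ship unrestricted and report a very large value.
        int max = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println(max > 128 ? "unlimited strength available" : "restricted policy: " + max);
    }
}
```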
Communicating with developers and fellow users.
Join the fabric-sdk-java channel.
If your issue is with building Fabric development environment please discuss this on rocket.chat's #fabric-dev-env channel.
JIRA Fields should be:
- Bug or New Feature
- Fix Versions
Please provide as much information as you can about the issue you're experiencing: stack traces, logs.
Please provide the output of java -XshowSettings:properties -version
Logging for the SDK can be enabled with setting environment variables:
ORG_HYPERLEDGER_FABRIC_SDK_DIAGNOSTICFILEDIR=&lt;full path to directory&gt; # dumps protobuf and diagnostic data. Can produce large amounts of data!
Fabric debug is by default enabled in the SDK docker-compose.yaml file with
On peers: CORE_LOGGING_LEVEL=DEBUG
Fabric CA debug is enabled by starting its command with the -d parameter.
If possible, upload full logs to the JIRA, not just the part where the issue occurred.
This work is licensed under a Creative Commons Attribution 4.0 International License.