**Concourse Pipeline** [![Concourse Build Status](https://prod.ci.gpdb.pivotal.io/api/v1/teams/main/pipelines/gpdb_master/badge)](https://prod.ci.gpdb.pivotal.io/teams/main/pipelines/gpdb_master) |
**Travis Build** [![Travis Build Status](https://travis-ci.org/greenplum-db/gpdb.svg?branch=master)](https://travis-ci.org/greenplum-db/gpdb)

----------------------------------------------------------------------

![Greenplum](logo-greenplum.png)

Greenplum Database (GPDB) is an advanced, fully featured, open
source data warehouse, based on PostgreSQL. It provides powerful and rapid analytics on
petabyte-scale data volumes. Uniquely geared toward big data
analytics, Greenplum Database is powered by the world’s most advanced
cost-based query optimizer, delivering high analytical query
performance on large data volumes.

The Greenplum project is released under the [Apache 2
license](http://www.apache.org/licenses/LICENSE-2.0). We want to thank
all our past and present community contributors and are very interested in
all new potential contributions. For the Greenplum Database community
no contribution is too small; we encourage all types of contributions.

## Overview

A Greenplum cluster consists of a __master__ server and multiple
__segment__ servers. All user data resides in the segments; the master
contains only metadata. The master server and all the segments share
the same schema.

Users always connect to the master server, which divides up the query
into fragments that are executed in the segments, and collects the results.

More information can be found on the [project website](https://greenplum.org/).
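To see this division of labor in action, you can create a table and check how its rows are spread across segments. This is a sketch assuming a running cluster; the table `t` and its columns are arbitrary placeholders:

```
-- Rows are hash-distributed across segments by the chosen column
CREATE TABLE t (id int, payload text) DISTRIBUTED BY (id);
INSERT INTO t SELECT i, 'row ' || i FROM generate_series(1, 100) i;

-- The gp_segment_id system column shows which segment holds each row
SELECT gp_segment_id, count(*) FROM t GROUP BY 1 ORDER BY 1;
```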

## Building Greenplum Database with GPORCA
GPORCA is a cost-based optimizer which is used by Greenplum Database in
conjunction with the PostgreSQL planner. It is also known as just ORCA,
or Pivotal Optimizer. The code for GPORCA resides in a
separate repository; the steps below outline how to build Greenplum with
GPORCA enabled.

### Installing dependencies (for macOS developers)
Follow [these macOS steps](README.macOS.md) to get your system ready for GPDB.

### Installing dependencies (for Linux developers)
Follow the [appropriate Linux steps](README.linux.md) to get your system ready for GPDB.

<a name="buildOrca"></a>
### Build the optimizer
#### Automatically with Conan dependency manager

```bash
cd depends
./configure
make
make install_local
cd ..
```

#### Manually
Follow the directions in the [GPORCA README](https://github.com/greenplum-db/gporca).

**Note**: Update to the latest GPORCA (`git pull --ff-only`) if you see an error message like the one below:

    checking Checking ORCA version... configure: error: Your ORCA version is expected to be 2.33.XXX
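A typical recovery sequence looks roughly like this; a sketch assuming GPORCA was cloned into a sibling directory (the path is illustrative):

```
cd ../gporca
git pull --ff-only
# Rebuild and reinstall GPORCA per its README,
# then re-run ./configure in the gpdb tree
```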

### Build the database

```
# Configure build environment to install at /usr/local/gpdb
./configure --with-perl --with-python --with-libxml --with-gssapi --prefix=/usr/local/gpdb

# Compile and install
make -j8
make -j8 install

# Bring the Greenplum environment into your running shell
source /usr/local/gpdb/greenplum_path.sh

# Start demo cluster
make create-demo-cluster

# (gpdemo-env.sh contains __PGPORT__ and __MASTER_DATA_DIRECTORY__ values)
source gpAux/gpdemo/gpdemo-env.sh
```
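Once the demo cluster is running and `gpdemo-env.sh` has been sourced, a quick smoke test might look like this (a sketch; `psql` picks up the port from the `PGPORT` variable set by that script):

```
psql postgres -c 'select version();'
psql postgres -c 'select * from gp_segment_configuration;'
```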

The directory and the TCP ports for the demo cluster can be changed on the fly.
Instead of `make cluster`, consider:

```
DATADIRS=/tmp/gpdb-cluster MASTER_PORT=15432 PORT_BASE=25432 make cluster
```

The TCP port for the regression test can be changed on the fly:

```
PGPORT=15432 make installcheck-world
```

Once built and started, run `psql` and check the GPOPT (i.e. GPORCA) version:

```
select gp_opt_version();
```

To turn GPORCA off and use the Postgres planner for query optimization:
```
set optimizer=off;
```
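To confirm which optimizer handled a given query, inspect the `EXPLAIN` output; a sketch (the table `t` is an arbitrary placeholder):

```
set optimizer=on;
explain select count(*) from t;
-- The plan output includes a line identifying which optimizer produced it.
```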

If you want to clean all generated files:
```
make distclean
```

## Running tests

* The default regression tests

```
make installcheck-world
```

* The top-level target __installcheck-world__ will run all regression
  tests in GPDB against the running cluster. For testing individual
  parts, the respective targets can be run separately.

* The PostgreSQL __check__ target does not work. Setting up a
  Greenplum cluster is more complicated than a single-node PostgreSQL
  installation, and no one has done the work to have __make check__
  create a cluster. Create a cluster manually or use gpAux/gpdemo/
  (example below) and run the top-level __make installcheck-world__
  against that. Patches are welcome!

* The PostgreSQL __installcheck__ target does not work either, because
  some tests are known to fail with Greenplum. The
  __installcheck-good__ schedule in __src/test/regress__ excludes those
  tests.

* When adding a new test, please add it to one of the GPDB-specific test
  schedules (greenplum_schedule), rather than to the PostgreSQL tests inherited
  from the upstream. We try to keep the upstream tests identical to the upstream
  versions, to make merging with newer PostgreSQL releases easier.
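A new GPDB-specific test typically follows the standard PostgreSQL regression layout; a sketch, with `my_feature` as a placeholder name:

```
# In src/test/regress/:
#   sql/my_feature.sql       -- queries to run
#   expected/my_feature.out  -- expected output
# Then list the test in greenplum_schedule:
#   test: my_feature
```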

## Alternative Configurations

### Building GPDB without GPORCA

Currently, GPDB is built with GPORCA by default, so the latest GPORCA libraries and headers
need to be available in the environment. [Build and Install](#buildOrca) the latest GPORCA.

If you want to build GPDB without GPORCA, pass the `--disable-orca` flag to configure.
```
# Clean environment
make distclean

# Configure build environment to install at /usr/local/gpdb
./configure --disable-orca --with-perl --with-python --with-libxml --prefix=/usr/local/gpdb
```

### Building GPDB with PXF

PXF is an extension framework for GPDB that enables fast access to external Hadoop datasets.
Refer to [PXF extension](gpcontrib/pxf/README.md) for more information.

Currently, GPDB is built with PXF by default (--enable-pxf is on).
In order to build GPDB without PXF, simply invoke `./configure` with the additional option `--disable-pxf`.
PXF requires curl, so `--enable-pxf` is not compatible with the `--without-libcurl` option.

### Building GPDB with gpperfmon enabled

gpperfmon tracks a variety of queries, statistics, system properties, and metrics.
To build with it enabled, add the `--enable-gpperfmon` option to your `configure`
invocation.

See [more information about gpperfmon here](gpAux/gpperfmon/README.md)

gpperfmon depends on several libraries, such as apr, apu, and libsigar.
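For example, a `configure` invocation with gpperfmon enabled might look like this (a sketch; the prefix and other flags follow the earlier build example):

```
./configure --enable-gpperfmon --with-perl --with-python --with-libxml --prefix=/usr/local/gpdb
```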

### Building GPDB with Python3 enabled

GPDB supports Python3 with the plpython3u UDF language.

See [how to enable Python3](src/pl/plpython/README.md) for details.


### Building GPDB client tools on Windows

See [Building GPDB client tools on Windows](README.windows.md) for details.

## Development with Docker

See [README.docker.md](README.docker.md).

We provide a Docker image with all dependencies required to compile and test
GPDB [(see usage)](src/tools/docker/README.md). You can view the dependency Dockerfile at `./src/tools/docker/centos6-admin/Dockerfile`.
The image is hosted on Docker Hub at `pivotaldata/gpdb-dev:centos6-gpadmin`.
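To try the image directly, something like the following should work (a sketch; the volume mount path is an assumption about where you keep your checkout):

```
docker pull pivotaldata/gpdb-dev:centos6-gpadmin
docker run -it -v "$(pwd)":/gpdb pivotaldata/gpdb-dev:centos6-gpadmin bash
```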

A quickstart guide to Docker can be found on the [Pivotal Engineering Journal](http://engineering.pivotal.io/post/docker-gpdb/).

## Development with Vagrant

There is a Vagrant-based [quickstart guide for developers](src/tools/vagrant/README.md).

## Code layout

The directory layout of the repository follows the same general layout
as upstream PostgreSQL. There are changes compared to PostgreSQL
throughout the codebase, but a few larger additions worth noting:

* __gpMgmt/__

  Contains Greenplum-specific command-line tools for managing the
  cluster. Scripts like gpinit, gpstart, gpstop live here. They are
  mostly written in Python.

* __gpAux/__

  Contains Greenplum-specific release management scripts, and vendored
  dependencies. Some additional directories are submodules and will be
  made available over time.

* __gpcontrib/__

  Much like the PostgreSQL contrib/ directory, this directory contains
  extensions such as gpfdist, PXF and gpmapreduce which are Greenplum-specific.

* __doc/__

  In PostgreSQL, the user manual lives here. In Greenplum, the user
  manual is maintained separately and only the reference pages used
  to build man pages are here.

* __gpdb-doc/__

  Contains the Greenplum documentation in DITA XML format. Refer to
  `gpdb-doc/README.md` for information on how to build, and work with
  the documentation.

* __ci/__

  Contains configuration files for the GPDB continuous integration system.

* __src/backend/cdb/__

  Contains larger Greenplum-specific backend modules. For example,
  communication between segments, turning plans into parallelizable
  plans, mirroring, distributed transaction and snapshot management,
  etc. __cdb__ stands for __Cluster Database__ - it was a workname used in
  the early days. That name is no longer used, but the __cdb__ prefix
  remains.

* __src/backend/gpopt/__

  Contains the so-called __translator__ library, for using the GPORCA
  optimizer with Greenplum. The translator library is written in C++
  and contains glue code for translating plans and queries
  between the DXL format used by GPORCA and the PostgreSQL internal
  representation.

* __src/backend/fts/__

  FTS is a process that runs on the master node and periodically
  polls the segments to maintain the status of each segment.

## Contributing

Greenplum is maintained by a core team of developers with commit rights to the
[main gpdb repository](https://github.com/greenplum-db/gpdb) on GitHub. At the
same time, we are very eager to receive contributions from anybody in the wider
Greenplum community. This section covers all you need to know if you want to see
your code or documentation changes added to Greenplum and appear in
future releases.

### Getting started

Greenplum is developed on GitHub, and anybody wishing to contribute to it will
have to [have a GitHub account](https://github.com/signup/free) and be familiar
with [Git tools and workflow](https://wiki.postgresql.org/wiki/Working_with_Git).
It is also recommended that you follow the [developer's mailing list](https://greenplum.org/community/),
since some of the contributions may generate more detailed discussions there.

Once you have your GitHub account, [fork](https://github.com/greenplum-db/gpdb/fork)
this repository so that you have your own private copy to start hacking on and to
use as the source of pull requests.

Anybody contributing to Greenplum has to be covered by either the Corporate or
the Individual Contributor License Agreement. If you have not previously done
so, please fill out and submit the [Contributor License Agreement](https://cla.pivotal.io/sign/greenplum).
Note that we do allow for really trivial changes to be contributed without a
CLA if they fall under the rubric of [obvious fixes](https://cla.pivotal.io/about#obvious-fixes).
However, since our GitHub workflow checks for the CLA by default, you may find it
easier to submit one instead of claiming an "obvious fix" exception.

### Licensing of Greenplum contributions

If the contribution you're submitting is original work, you can assume that Pivotal
will release it as part of an overall Greenplum release available to the downstream
consumers under the Apache License, Version 2.0. However, in addition to that, Pivotal
may also decide to release it under a different license (such as the [PostgreSQL License](https://www.postgresql.org/about/licence/)) to the upstream consumers that require it. A typical example here would be Pivotal
upstreaming your contribution back to the PostgreSQL community (which can be done
either verbatim or as part of a larger changeset).

If the contribution you're submitting is NOT original work, you have to indicate the name
of the license and also make sure that it is similar in terms to the Apache License 2.0.
The Apache Software Foundation maintains a list of these licenses under [Category A](https://www.apache.org/legal/resolved.html#category-a). In addition to that, you may be required to make proper attribution in the
[NOTICE file](https://github.com/greenplum-db/gpdb/blob/master/NOTICE) similar to [these examples](https://github.com/greenplum-db/gpdb/blob/master/NOTICE#L278).

Finally, keep in mind that it is NEVER a good idea to remove licensing headers from
work that is not your own. Even if you are using parts of a file that originally had
a licensing header at the top, you should err on the side of preserving it.
As always, if you are not quite sure about the licensing implications of your contributions,
feel free to reach out to us on the developer mailing list.

### Coding guidelines

Your chances of getting feedback and seeing your code merged into the project
greatly depend on how granular your changes are. If you happen to have a bigger
change in mind, we highly recommend engaging on the developer's mailing list
first and sharing your proposal with us before you spend a lot of time writing
code. Even when your proposal gets validated by the community, we still recommend
doing the actual work as a series of small, self-contained commits. This makes
the reviewer's job much easier and increases the timeliness of feedback.

When it comes to C and C++ parts of Greenplum, we try to follow
[PostgreSQL Coding Conventions](https://www.postgresql.org/docs/devel/source.html).
In addition to that we require that:
   * All Python code passes [Pylint](https://www.pylint.org/)
   * All Go code is formatted according to [gofmt](https://golang.org/cmd/gofmt/)

We recommend using ```git diff --color``` when reviewing your changes so that you
don't have any spurious whitespace issues in the code that you submit.
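These checks can be run locally before submitting; a sketch (the paths are illustrative):

```
pylint gpMgmt/bin/gpstart   # lint a Python tool
gofmt -l .                  # lists Go files that need reformatting
git diff --color            # spot spurious whitespace changes
```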

All new functionality that is contributed to Greenplum should be covered by regression
tests that are contributed alongside it. If you are uncertain about how to test or document
your work, please raise the question on the gpdb-dev mailing list and the developer
community will do its best to help you.

At the very minimum you should always be running
```make installcheck-world```
to make sure that you're not breaking anything.

### Changes applicable to upstream PostgreSQL

If the change you're working on touches functionality that is common between PostgreSQL
and Greenplum, you may be asked to forward-port it to PostgreSQL. This is not only so
that we keep reducing the delta between the two projects, but also so that any change
that is relevant to PostgreSQL can benefit from a much broader review of the upstream
PostgreSQL community. In general, it is a good idea to keep both code bases handy so
you can be sure whether your changes may need to be forward-ported.

### Submission timing

To improve the odds of the right discussion of your patch or idea happening, pay attention
to what the community work cycle is. For example, if you send in a brand new idea in the
beta phase of a release, we may defer review or target its inclusion for a later version.
Feel free to ask on the mailing list to learn more about the Greenplum release policy and timing.

### Patch submission

Once you are ready to share your work with the Greenplum core team and the rest of
the Greenplum community, you should push all the commits to a branch in your own
repository forked from the official Greenplum and
[send us a pull request](https://help.github.com/articles/about-pull-requests/).
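The mechanics look roughly like this (a sketch; `my-feature` is a placeholder branch name, and `origin` is assumed to point at your fork):

```
git checkout -b my-feature
# ...hack and commit...
git push origin my-feature
# then open a pull request on GitHub from my-feature
# against greenplum-db/gpdb master
```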

We welcome submissions that are works in progress in order to get feedback early
in the development process. When opening the pull request, select "Draft" in
the dropdown menu to clearly mark the intent of the pull request. Prefixing the
title with "WIP:" is also good practice.

All new features should be submitted against the main master branch. Bugfixes
should also be submitted against master unless they only exist in a supported
back-branch. If the bug exists in both master and back-branches, explain this
in the PR description.

### Validation checks and CI

Once you submit your pull request, you will immediately see a number of validation
checks performed by our automated CI pipelines. There also will be a CLA check
telling you whether your CLA was recognized. If any of these checks fails, you
will need to update your pull request to take care of the issue. Pull requests
with failed validation checks are very unlikely to receive any further peer
review from the community members.

Keep in mind that the most common reason for a failed CLA check is a mismatch
between an email on file and an email recorded in the commits submitted as
part of the pull request.

If you cannot figure out why a certain validation check failed, feel free to
ask on the developer's mailing list, but make sure to include a direct link
to the pull request in your email.

### Patch review

A submitted pull request with passing validation checks is assumed to be available
for peer review. Peer review is the process that ensures that contributions to Greenplum
are of high quality and align well with the road map and community expectations. Every
member of the Greenplum community is encouraged to review pull requests and provide
feedback; you don't have to be a core team member to do so, and following the stream
of pull requests is a great way to become a long-term contributor to Greenplum.
As [Linus would say](https://en.wikipedia.org/wiki/Linus's_Law),
"given enough eyeballs, all bugs are shallow".

One outcome of the peer review could be a consensus that you need to modify your
pull request in certain ways. GitHub allows you to push additional commits into
a branch from which a pull request was sent. Those additional commits will be then
visible to all of the reviewers.

A peer review converges when it receives at least one +1 and no -1s votes from
the participants. At that point you should expect one of the core team
members to pull your changes into the project.

Greenplum prides itself on being a collaborative, consensus-driven environment.
We do not believe in vetoes, and any -1 vote cast as part of the peer review
has to have a detailed technical explanation of what's wrong with the change.
Should a strong disagreement arise, it may be advisable to take the matter onto
the mailing list since it allows for a more natural flow of the conversation.

At any time during the patch review, you may experience delays based on the
availability of reviewers and core team members. Please be patient. That being
said, don't get discouraged either. If you're not getting the expected feedback for
a few days, add a comment asking for updates on the pull request itself or send
an email to the mailing list.

### Direct commits to the repository

On occasion you will see core team members committing directly to the repository
without going through the pull request workflow. This is reserved for small changes
only and the rule of thumb we use is this: if the change touches any functionality
that may result in a test failure, then it has to go through a pull request workflow.
If, on the other hand, the change is in the non-functional part of the code base
(such as fixing a typo inside of a comment block) core team members can decide to
just commit to the repository directly.

## Documentation

For Greenplum Database documentation, please check the
[online documentation](http://docs.greenplum.org/).

For further information beyond the scope of this README, please see
[our wiki](https://github.com/greenplum-db/gpdb/wiki).