## Travis [![Travis Build Status](https://travis-ci.org/greenplum-db/gpdb.svg?branch=master)](https://travis-ci.org/greenplum-db/gpdb)

## Concourse [![Concourse Build Status](https://gpdb.ci.pivotalci.info/api/v1/teams/gpdb/pipelines/gpdb_master/jobs/gpdb_rc_packaging_centos/badge)](https://gpdb.ci.pivotalci.info/teams/gpdb)

----------------------------------------------------------------------

![Greenplum](/gpAux/releng/images/logo-greenplum.png)

The Greenplum Database (GPDB) is an advanced, fully featured, open
source data warehouse. It provides powerful and rapid analytics on
petabyte scale data volumes. Uniquely geared toward big data
analytics, Greenplum Database is powered by the world’s most advanced
cost-based query optimizer delivering high analytical query
performance on large data volumes.

The Greenplum project is released under the [Apache 2
license](http://www.apache.org/licenses/LICENSE-2.0). We want to thank
all our current community contributors and are really interested in
all new potential contributions. For the Greenplum Database community
no contribution is too small, we encourage all types of contributions.

## Overview

A Greenplum cluster consists of a __master__ server and multiple
__segment__ servers. All user data resides in the segments; the master
contains only metadata. The master server, and all the segments, share
the same schema.

Users always connect to the master server, which divides up the query
into fragments that are executed in the segments, sends the fragments
to the segments, and collects the results.
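
For example, once the demo cluster described later in this README is running,
a connection to the master might look like this (a sketch; 15432 is the demo
master port default, so adjust it to your `PGPORT`):

```
psql -p 15432 postgres
```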

## Building Greenplum Database with GPORCA

For macOS developers, follow [these steps](README.macOS.md) to get your system ready for GPDB.

Currently GPDB assumes ORCA libraries and headers are available on the target
system and builds with ORCA by default. For your convenience, here are
the steps to build the optimizer. For the most up-to-date build
instructions, see the READMEs in the following repositories:

1. https://github.com/greenplum-db/gp-xerces
1. https://github.com/greenplum-db/gporca

<a name="buildOrca"></a>
### Build the optimizer

1. Install our patched version of Xerces-C

    ```
    git clone https://github.com/greenplum-db/gp-xerces
    mkdir gp-xerces/build
    cd gp-xerces/build
    ../configure
    make install
    cd ../..
    ```

1. ORCA requires [CMake](https://cmake.org) and
   [Ninja](https://ninja-build.org/); make sure you have them installed.
   Installation instructions vary, so please check the CMake and Ninja websites.

1. Install ORCA, the query optimizer:

    ```
    git clone https://github.com/greenplum-db/gporca
    mkdir gporca/build
    cd gporca/build
    cmake -GNinja ..
    ninja install
    cd ../..
    ```
    **Note**: Get the latest ORCA with `git pull --ff-only` if you see an error message like the one below:
    ```
    checking Checking ORCA version... configure: error: Your ORCA version is expected to be 2.33.XXX
    ```
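
    A minimal update-and-reinstall sequence (a sketch; assumes the clone layout from the steps above):

    ```
    cd gporca
    git pull --ff-only
    cd build
    ninja install
    cd ../..
    ```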

### Install needed Python modules

  Add the following Python modules (Python 2.7 and 2.6 are supported):

  * psutil
  * lockfile (>= 0.9.1)
  * paramiko
  * setuptools

  If necessary, upgrade modules using `pip install --upgrade`.
  pip should be at least version 7.x.x.
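
  For example (a minimal sketch; assumes pip >= 7.x is already available):

  ```
  pip install --upgrade psutil "lockfile>=0.9.1" paramiko setuptools
  ```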
    
### Build the database
```
# Configure build environment to install at /usr/local/gpdb
./configure --with-perl --with-python --with-libxml --prefix=/usr/local/gpdb

# Compile and install
make
make install

# Bring the Greenplum environment into your running shell
source /usr/local/gpdb/greenplum_path.sh

# Start demo cluster (gpdemo-env.sh is created, which contains
# __PGPORT__ and __MASTER_DATA_DIRECTORY__ values)
cd gpAux/gpdemo
make create-demo-cluster
source gpdemo-env.sh
```

Compilation can be sped up with parallelization. Instead of `make`, consider:

```
make -j8
```

The directory and the TCP ports for the demo cluster can be changed on the fly.
Instead of `make cluster`, consider:

```
DATADIRS=/tmp/gpdb-cluster MASTER_PORT=15432 PORT_BASE=25432 make cluster
```

The TCP port for the regression test can be changed on the fly:

```
PGPORT=15432 make installcheck-world
```

Once built and started, run `psql` and check the GPOPT (i.e. GPORCA) version:

```
select gp_opt_version();
```

To turn ORCA off and use the legacy planner for query optimization:
```
set optimizer=off;
```

If you want to clean all generated files:
```
make distclean
```

## Running tests

* The default regression tests

```
make installcheck-world
```

* The top-level target __installcheck-world__ will run all regression
  tests in GPDB against the running cluster. For testing individual
  parts, the respective targets can be run separately.

* The PostgreSQL __check__ target does not work. Setting up a
  Greenplum cluster is more complicated than a single-node PostgreSQL
  installation, and no one has done the work to have __make check__
  create a cluster. Create a cluster manually or use gpAux/gpdemo/
  (example below) and run the top-level __make installcheck-world__
  against that. Patches are welcome!

* The PostgreSQL __installcheck__ target does not work either, because
  some tests are known to fail with Greenplum. The
  __installcheck-good__ schedule in __src/test/regress__ excludes those
  tests (see the example after this list).

* When adding a new test, please add it to one of the GPDB-specific tests,
  in greenplum_schedule, rather than the PostgreSQL tests inherited from the
  upstream. We try to keep the upstream tests identical to the upstream
  versions, to make merging with newer PostgreSQL releases easier.
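
For example, to run only the curated GPDB schedule against a running cluster
(a sketch; the target name comes from the schedule mentioned above, and assumes
`greenplum_path.sh` and `gpdemo-env.sh` have been sourced):

```
make -C src/test/regress installcheck-good
```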

## Alternative Configurations

### Building GPDB without GPORCA
Currently, GPDB is built with ORCA by default, so the latest ORCA libraries and headers need
to be available in the environment. [Build and Install](#buildOrca) the latest ORCA.

If you want to build GPDB without ORCA, pass the `--disable-orca` flag to configure.

```
# Clean environment
make distclean

# Configure build environment to install at /usr/local/gpdb
./configure --disable-orca --with-perl --with-python --with-libxml --prefix=/usr/local/gpdb
```

### Building GPDB with code generation enabled

To build GPDB with code generation (codegen) enabled, you will need CMake 2.8 or higher
and a recent version of LLVM and Clang (including headers and developer libraries). Codegen utils
is currently developed against the LLVM 3.7.X release series. You can find more details about the codegen feature,
including details about obtaining the prerequisites, building and testing GPDB with codegen, in the [Codegen README](src/backend/codegen).

In short, add the `--enable-codegen` option to `configure`, optionally giving the paths
to the LLVM and Clang libraries on your system:
```
# Configure build environment to install at /usr/local/gpdb
# Enable CODEGEN
./configure --with-perl --with-python --with-libxml --enable-codegen --prefix=/usr/local/gpdb --with-codegen-prefix="/path/to/llvm;/path/to/clang"
```

### Building GPDB with gpperfmon enabled

gpperfmon tracks a variety of queries, statistics, system properties, and metrics.
To build with it enabled, add the `--enable-gpperfmon` option to `configure`.
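
For example (a sketch; the other flags simply mirror the earlier build example):

```
./configure --enable-gpperfmon --with-perl --with-python --with-libxml --prefix=/usr/local/gpdb
```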

See [more information about gpperfmon here](gpAux/gpperfmon/README.md)

gpperfmon depends on several libraries, such as apr, apu, and libsigar.

## Development with Docker

We provide a docker image with all dependencies required to compile and test
GPDB. You can view the dependency dockerfile at `./src/tools/docker/base/Dockerfile`.
The image is hosted on docker hub at `pivotaldata/gpdb-devel`. This docker
image is currently under heavy development.
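
For example, to fetch the image locally (assuming a working Docker installation):

```
docker pull pivotaldata/gpdb-devel
```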

A quickstart guide to Docker can be found on the [Pivotal Engineering Journal](http://engineering.pivotal.io/post/docker-gpdb/).

Known issues:
* The `installcheck-world` make target has at least 4 failures, some of which
  are non-deterministic.

### Running regression tests with Docker

1. Create a docker host with 8gb RAM and 4 cores
    ```bash
    docker-machine create -d virtualbox --virtualbox-cpu-count 4 --virtualbox-disk-size 50000 --virtualbox-memory 8192 gpdb
    eval $(docker-machine env gpdb)
    ```

1. Build your code on gpdb-devel rootfs
    ```bash
    cd [path/to/gpdb]
    docker build .
    # image beefc4f3 built
    ```
    The top-level Dockerfile will automatically sync your current working
    directory into the docker image. This means that any code you are working
    on will automatically be built and ready for testing in the docker context.

1. Log in to the docker image
    ```bash
    docker run -it beefc4f3
    ```

1. As the `gpadmin` user, run `installcheck-world`
    ```bash
    su gpadmin
    cd /workspace/gpdb
    make installcheck-world
    ```

### Caveats

* No Space Left On Device

    On macOS the docker-machine VM can periodically fill up with unused images.
    You can clear these images with a combination of docker commands.
    ```bash
    # assuming no currently running containers
    # remove all stopped containers from cache
    docker ps -aq | xargs -n 1 docker rm
    # remove all untagged images
    docker images -aq --filter dangling=true | xargs -n 1 docker rmi
    ```

* The native macOS docker client available with docker 1.12+ (beta) or
  Community Edition 17+ may also work.

## Code layout

The directory layout of the repository follows the same general layout
as upstream PostgreSQL. There are changes compared to PostgreSQL
throughout the codebase, but a few larger additions worth noting:

* __gpMgmt/__

  Contains Greenplum-specific command-line tools for managing the
  cluster. Scripts like gpinit, gpstart, gpstop live here. They are
  mostly written in Python.

* __gpAux/__

  Contains Greenplum-specific extensions such as gpfdist and
  gpmapreduce.  Some additional directories are submodules and will be
  made available over time.

* __doc/__

  In PostgreSQL, the user manual lives here. In Greenplum, the user
  manual is maintained separately and only the reference pages used
  to build man pages are here.

* __gpdb-doc/__

  Contains the Greenplum documentation in DITA XML format. Refer to
  `gpdb-doc/README.md` for information on how to build, and work with
  the documentation.

* __ci/__

  Contains configuration files for the GPDB continuous integration system.

* __src/backend/cdb/__

  Contains larger Greenplum-specific backend modules. For example,
  communication between segments, turning plans into parallelizable
  plans, mirroring, distributed transaction and snapshot management,
  etc. __cdb__ stands for __Cluster Database__ - it was a working name used in
  the early days. That name is no longer used, but the __cdb__ prefix
  remains.

* __src/backend/gpopt/__

  Contains the so-called __translator__ library, for using the ORCA
  optimizer with Greenplum. The translator library is written in C++
  and contains glue code for translating plans and queries
  between the DXL format used by ORCA and the PostgreSQL internal
  representation.

* __src/backend/gp_libpq_fe/__

  A slightly modified copy of libpq. The master node uses this to
  connect to segments, and to send fragments of a query plan to
  segments for execution. It is linked directly into the backend; it
  is not a shared library like libpq.

* __src/backend/fts/__

  FTS is a process that runs in the master node, and periodically
  polls the segments to maintain the status of each segment.

## Contributing

Greenplum is maintained by a core team of developers with commit rights to the
[main gpdb repository](https://github.com/greenplum-db/gpdb) on GitHub. At the
same time, we are very eager to receive contributions from anybody in the wider
Greenplum community. This section covers all you need to know if you want to see
your code or documentation changes added to Greenplum and appear in future
releases.

### Getting started

Greenplum is developed on GitHub, and anybody wishing to contribute to it will
have to [have a GitHub account](https://github.com/signup/free) and be familiar
with [Git tools and workflow](https://wiki.postgresql.org/wiki/Working_with_Git).
It is also recommended that you follow the [developer's mailing list](http://greenplum.org/#contribute)
since some of the contributions may generate more detailed discussions there.

Once you have your GitHub account, [fork](https://github.com/greenplum-db/gpdb/fork) the
repository so that you have a private copy to start hacking on and to
use as the source of pull requests.

Anybody contributing to Greenplum has to be covered by either the Corporate or
the Individual Contributor License Agreement. If you have not previously done
so, please fill out and submit the [Contributor License Agreement](https://cla.pivotal.io/sign/greenplum).
Note that we do allow for really trivial changes to be contributed without a
CLA if they fall under the rubric of [obvious fixes](https://cla.pivotal.io/about#obvious-fixes).
However, since our GitHub workflow checks for CLA by default, you may find it
easier to submit one instead of claiming an "obvious fix" exception.

### Coding guidelines

Your chances of getting feedback and seeing your code merged into the project
greatly depend on how granular your changes are. If you happen to have a bigger
change in mind, we highly recommend engaging on the developer's mailing list
first and sharing your proposal with us before you spend a lot of time writing
code. Even when your proposal gets validated by the community, we still recommend
doing the actual work as a series of small, self-contained commits. This makes
the reviewer's job much easier and increases the timeliness of feedback.

When it comes to the C and C++ parts of Greenplum, we try to follow
[PostgreSQL Coding Conventions](https://www.postgresql.org/docs/devel/static/source.html).
In addition, we require that:
   * All Python code passes [Pylint](https://www.pylint.org/)
   * All Go code is formatted according to [gofmt](https://golang.org/cmd/gofmt/)
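
A quick local check might look like this (an illustrative sketch; the file
paths are hypothetical, not a mandated invocation):

```
pylint gpMgmt/bin/my_changed_script.py
gofmt -l path/to/changed/go/sources
```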

We recommend using `git diff --color` when reviewing your changes so that you
don't have any spurious whitespace issues in the code that you submit.

All new functionality that is contributed to Greenplum should be covered by regression
tests that are contributed alongside it. If you are uncertain about how to test or document
your work, please raise the question on the gpdb-dev mailing list and the developer
community will do its best to help you.

### Testing guidelines

At the very minimum you should always be running
`make installcheck-world`
to make sure that you're not breaking anything.

### Changes applicable to upstream PostgreSQL

If the change you're working on touches functionality that is common between PostgreSQL
and Greenplum, you may be asked to forward-port it to PostgreSQL. This is not only so
that we keep reducing the delta between the two projects, but also so that any change
that is relevant to PostgreSQL can benefit from a much broader review of the upstream
PostgreSQL community. In general, it is a good idea to keep both code bases handy so
you can be sure whether your changes may need to be forward-ported.

### Submission timing

To improve the odds of the right discussion of your patch or idea happening, pay attention 
to what the community work cycle is. For example, if you send in a brand new idea in the 
beta phase, don't be surprised if no one is paying attention because we are focused on 
release work. Come back when the beta is done, please!

You can read more on Greenplum release policy and timing in RELEASE.md.

### Patch submission

Once you are ready to share your work with the Greenplum core team and the rest of
the Greenplum community, you should push all the commits to a branch in your own 
repository forked from the official Greenplum and [send us a pull request](https://help.github.com/articles/about-pull-requests/).
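
A typical flow looks like this (a sketch; assumes `origin` points at your fork,
and the branch name is hypothetical):

```
git checkout -b my-feature
# ...commit your work...
git push origin my-feature
# then open a pull request on GitHub from that branch
```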

For now, we require all pull requests to be submitted against the main master
branch, but over time, once there are many supported open source releases of Greenplum
in the wild, you may decide to submit your pull requests against an active
release branch if the change is only applicable to a given release.

### Validation checks and CI

Once you submit your pull request, you will immediately see a number of validation
checks performed by our automated CI pipelines. There also will be a CLA check
telling you whether your CLA was recognized. If any of these checks fails, you
will need to update your pull request to take care of the issue. Pull requests
with failed validation checks are very unlikely to receive any further peer
review from the community members.

Keep in mind that the most common reason for a failed CLA check is a mismatch
between an email on file and an email recorded in the commits submitted as
part of the pull request.
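
To check which email your commits will carry (the address below is illustrative):

```
git config user.email                     # show the email used for new commits
git config user.email "you@example.com"   # align it with your CLA if needed
```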

If you cannot figure out why a certain validation check failed, feel free to
ask on the developer's mailing list, but make sure to include a direct link
to the pull request in your email.

### Patch review

A submitted pull request with passing validation checks is assumed to be available
for peer review. Peer review is the process that ensures that contributions to Greenplum
are of high quality and align well with the road map and community expectations. Every
member of the Greenplum community is encouraged to review pull requests and provide
feedback. Since you don't have to be a core team member to do that, we
recommend following the stream of pull requests to anybody who's interested in becoming
a long-term contributor to Greenplum. As [Linus would say](https://en.wikipedia.org/wiki/Linus's_Law),
"given enough eyeballs, all bugs are shallow".

One outcome of the peer review could be a consensus that you need to modify your
pull request in certain ways. GitHub allows you to push additional commits into
a branch from which a pull request was sent. Those additional commits will be then
visible to all of the reviewers.

A peer review converges when it receives at least one +1 and no -1 votes from
the participants. At that point you should expect one of the core team
members to pull your changes into the project.

Greenplum prides itself on being a collaborative, consensus-driven environment.
We do not believe in vetoes, and any -1 vote cast as part of the peer review
has to have a detailed technical explanation of what's wrong with the change.
Should a strong disagreement arise, it may be advisable to take the matter to
the mailing list since it allows for a more natural flow of the conversation.

At any time during the patch review, you may experience delays based on the
availability of reviewers and core team members. Please be patient. That being
said, don't get discouraged either. If you're not getting expected feedback for
a few days, add a comment asking for updates on the pull request itself or send
an email to the mailing list.

### Direct commits to the repository

On occasion you will see core team members committing directly to the repository
without going through the pull request workflow. This is reserved for small changes
only and the rule of thumb we use is this: if the change touches any functionality
that may result in a test failure, then it has to go through a pull request workflow.
If, on the other hand, the change is in the non-functional part of the code base
(such as fixing a typo inside of a comment block) core team members can decide to
just commit to the repository directly.

## Glossary

* __QD__

  Query Dispatcher. A synonym for the master server.

* __QE__

  Query Executor. A synonym for a segment server.

## Documentation

For Greenplum Database documentation, please check online docs:
http://greenplum.org/docs/

For further information beyond the scope of this README, please see
[our wiki](https://github.com/greenplum-db/gpdb/wiki).

There is also a Vagrant-based quickstart guide for developers in `src/tools/vagrant/README.md`.