[![Build Status](https://travis-ci.org/greenplum-db/gpdb.svg?branch=master)](https://travis-ci.org/greenplum-db/gpdb)

----------------------------------------------------------------------

![Greenplum](/gpAux/releng/images/logo-greenplum.png)

The Greenplum Database (GPDB) is an advanced, fully featured, open
source data warehouse. It provides powerful and rapid analytics on
petabyte scale data volumes. Uniquely geared toward big data
analytics, Greenplum Database is powered by the world’s most advanced
cost-based query optimizer delivering high analytical query
performance on large data volumes.

The Greenplum project is released under the [Apache 2
license](http://www.apache.org/licenses/LICENSE-2.0). We want to thank
all our current community contributors and welcome all new potential
contributions. For the Greenplum Database community, no contribution
is too small; we encourage all types of contributions.

## Overview

A Greenplum cluster consists of a __master__ server and multiple
__segment__ servers. All user data resides in the segments; the master
contains only metadata. The master server and all the segments share
the same schema.

Users always connect to the master server, which divides the query
into fragments, sends the fragments to the segments for execution, and
collects the results.
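
For example, once a cluster is running (see the demo cluster steps
below), all client access goes through the master. A minimal sketch,
assuming `psql` is on your PATH and the cluster environment (PGPORT
etc.) has been sourced:

```bash
# All client connections go to the master; PGPORT selects the master port
psql postgres -c "SELECT version();"
```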

## Requirements

* From the GPDB doc set, review [Configuring Your Systems and
  Installing
  Greenplum](http://gpdb.docs.pivotal.io/4360/prep_os-overview.html#topic1)
  and perform appropriate updates to your system for GPDB use.

* **gpMgmt** utilities - command line tools for managing the cluster.

  You will need to add the following Python modules (2.7 & 2.6 are
  supported) to your installation:

  * psutil
  * lockfile (>= 0.9.1)
  * paramiko
  * setuptools
  * epydoc

  If necessary, upgrade modules using `pip install --upgrade`.
  pip should be at least version 7.x.x.
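
  For example, the modules can be installed with pip. A minimal sketch,
  assuming `pip` is bound to the Python 2.6/2.7 interpreter that will run
  the utilities:

  ```bash
  # Install the Python modules required by the gpMgmt utilities
  pip install psutil "lockfile>=0.9.1" paramiko setuptools epydoc

  # Upgrade pip itself first if it is older than 7.x
  pip install --upgrade pip
  ```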

## Code layout

The directory layout of the repository follows the same general layout
as upstream PostgreSQL. There are changes compared to PostgreSQL
throughout the codebase, but a few larger additions worth noting:

* __gpMgmt/__

  Contains Greenplum-specific command-line tools for managing the
  cluster. Scripts like gpinit, gpstart, gpstop live here. They are
  mostly written in Python.

* __gpAux/__

  Contains Greenplum-specific extensions such as gpfdist and
  gpmapreduce.  Some additional directories are submodules and will be
  made available over time.

* __doc/__

  In PostgreSQL, the user manual lives here. In Greenplum, the user
  manual is distributed separately (see http://gpdb.docs.pivotal.io),
  and only the reference pages used to build man pages are here.

* __ci/__

  Contains configuration files for the GPDB continuous integration system.
 

* __src/backend/cdb/__

  Contains larger Greenplum-specific backend modules. For example,
  communication between segments, turning plans into parallelizable
  plans, mirroring, distributed transaction and snapshot management,
  etc. __cdb__ stands for __Cluster Database__, a working name used in
  the early days. That name is no longer used, but the __cdb__ prefix
  remains.

* __src/backend/gpopt/__

  Contains the so-called __translator__ library, for using the ORCA
  optimizer with Greenplum. The translator library is written in C++
  and contains glue code for translating plans and queries between the
  DXL format used by ORCA and the PostgreSQL internal representation.
  It goes unused unless GPDB is built with _--enable-orca_.

* __src/backend/gp_libpq_fe/__

  A slightly modified copy of libpq. The master node uses this to
  connect to segments, and to send fragments of a query plan to
  segments for execution. It is linked directly into the backend; it
  is not a shared library like libpq.

* __src/backend/fts/__

  FTS is a process that runs in the master node, and periodically
  polls the segments to maintain the status of each segment.

## Building GPDB

Some configure options are nominally optional, but required to pass
all regression tests. The minimum set of options for running the
regression tests successfully is:

`./configure --with-perl --with-python --with-libxml --enable-mapreduce`

### Build GPDB with Planner

```
# Clean environment
make distclean

# Configure build environment to install at /usr/local/gpdb
./configure --with-perl --with-python --with-libxml --enable-mapreduce --prefix=/usr/local/gpdb

# Compile and install
make
make install

# Bring in greenplum environment into your running shell
source /usr/local/gpdb/greenplum_path.sh

# Start demo cluster (gpdemo-env.sh is created, which contains
# the PGPORT and MASTER_DATA_DIRECTORY values)
cd gpAux/gpdemo
make cluster
source gpdemo-env.sh
```

The directory and the TCP ports for the demo cluster can be changed on the fly:

```
DATADIRS=/tmp/gpdb-cluster MASTER_PORT=15432 PORT_BASE=25432 make cluster
```

The TCP port for the regression test can be changed on the fly:

```
PGPORT=15432 make installcheck-good
```


### Build GPDB with GPORCA

You must first install the following libraries, in the order shown
(see the README in each repository):

1. https://github.com/greenplum-db/gp-xerces
2. https://github.com/greenplum-db/gpos
3. https://github.com/greenplum-db/gporca

Next, change your `configure` command to have the additional option `--enable-orca`.

```
# Configure build environment to install at /usr/local/gpdb
# Enable GPORCA
# Build with perl module (PL/Perl)
# Build with python module (PL/Python)
# Build with XML support
./configure --with-perl --with-python --with-libxml --enable-mapreduce --enable-orca --prefix=/usr/local/gpdb
```

Once built and started, run `psql` and check the GPOPT (i.e. GPORCA) version:

```
select gp_opt_version();
```
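
Whether GPORCA is actually used for a session is controlled by the
`optimizer` setting; a quick check from the shell (a sketch, assuming a
running cluster and `psql` on your PATH):

```bash
# Report the GPOPT (GPORCA) version and whether the optimizer is enabled
psql postgres -c "select gp_opt_version();"
psql postgres -c "show optimizer;"
```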

### Build GPDB with code generation enabled

To build GPDB with code generation (codegen) enabled, you will need cmake 2.8 or higher
and a recent version of llvm and clang (including headers and developer libraries). Codegen utils
is currently developed against the LLVM 3.7.X release series. You can find more details about the
codegen feature, including how to obtain the prerequisites and how to build and test GPDB with
codegen, in the [Codegen README](src/backend/codegen).

In short, add the `--enable-codegen` option to `configure`, optionally giving
the paths to the llvm and clang libraries on your system:
```
# Configure build environment to install at /usr/local/gpdb
# Enable CODEGEN
./configure --with-perl --with-python --with-libxml --enable-mapreduce --enable-codegen --prefix=/usr/local/gpdb --with-codegen-prefix="/path/to/llvm;/path/to/clang"
```

## Regression tests

* The default regression tests

```
make installcheck-good
```

* Optional extra/heavier regression tests

```
make installcheck-bugbuster
```

* The PostgreSQL __check__ target does not work. Setting up a
  Greenplum cluster is more complicated than a single-node PostgreSQL
  installation, and no one has done the work to have __make check__
  create a cluster. Create a cluster manually or use gpAux/gpdemo/
  (example below) and run __make installcheck-good__ against
  that. Patches are welcome!

* The PostgreSQL __installcheck__ target does not work either, because
  some tests are known to fail with Greenplum. The
  __installcheck-good__ schedule excludes those tests.

* When adding a new test, please add it to one of the GPDB-specific tests,
  in greenplum_schedule, rather than the PostgreSQL tests inherited from the
  upstream. We try to keep the upstream tests identical to the upstream
  versions, to make merging with newer PostgreSQL releases easier.
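
As mentioned above (the __make check__ note), a minimal sketch of running
the good schedule against the demo cluster, assuming the
`--prefix=/usr/local/gpdb` install from the build steps above:

```bash
# Build and install GPDB first (see "Building GPDB"), then:
source /usr/local/gpdb/greenplum_path.sh

# Create and start the demo cluster
cd gpAux/gpdemo
make cluster
source gpdemo-env.sh

# Run the Greenplum-specific schedule from the top of the source tree
cd ../..
make installcheck-good
```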

## Development with Docker

We provide a docker image with all dependencies required to compile and test
GPDB. You can view the dependency dockerfile at `./src/tools/docker/base/Dockerfile`.
The image is hosted on docker hub at `pivotaldata/gpdb-devel`. This docker
image is currently under heavy development.
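
To fetch the prebuilt image directly (a sketch; available tags may change):

```bash
docker pull pivotaldata/gpdb-devel
```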

A quickstart guide to Docker can be found on the [Pivotal Engineering Journal](http://engineering.pivotal.io/post/docker-gpdb/).

Known issues:
* The `installcheck-good` make target has at least 4 failures, some of which
  are non-deterministic.

### Running regression tests with Docker

1. Create a docker host with 8 GB RAM and 4 cores
    ```bash
    docker-machine create -d virtualbox --virtualbox-cpu-count 4 --virtualbox-disk-size 50000 --virtualbox-memory 8192 gpdb
    eval $(docker-machine env gpdb)
    ```

1. Build your code on the gpdb-devel rootfs
    ```bash
    cd [path/to/gpdb]
    docker build .
    # image beefc4f3 built
    ```
    The top level Dockerfile will automatically sync your current working
    directory into the docker image. This means that any code you are working
    on will automatically be built and ready for testing in the docker context.

1. Log into the docker image
    ```bash
    docker run -it beefc4f3
    ```

1. As the `gpadmin` user, run `installcheck-good`
    ```bash
    su gpadmin
    cd /workspace/gpdb
    make installcheck-good
    ```

### Caveats

* No Space Left On Device
    On macOS, the docker-machine VM can periodically fill up with unused images.
    You can clear these images with a combination of docker commands.
    ```bash
    # assuming no currently running containers
    # remove all stopped containers from cache
    docker ps -aq | xargs -n 1 docker rm
    # remove all untagged images
    docker images -aq --filter dangling=true | xargs -n 1 docker rmi
    ```

    Alternatively, you can use the (beta) native macOS Docker client now
    available in Docker 1.12.

## Contributing

If you have not previously done so, please fill out and
submit the [Contributor License Agreement](https://cla.pivotal.io/sign/greenplum).

## Glossary

* __QD__

  Query Dispatcher. A synonym for the master server.

* __QE__

  Query Executor. A synonym for a segment server.

## Documentation

For Greenplum Database documentation, please check the online docs:
http://gpdb.docs.pivotal.io

There is also a Vagrant-based quickstart guide for developers in `src/tools/vagrant/README.md`.