- 11 Jun 2018 (4 commits)
-
-
Committed by Jialun
-
Committed by Violet Cheng
The gpperfmon queries_history table shows zero values in the "rows_out" column even for queries that returned several rows as output. This fix reduces the likelihood of the bug occurring, but it can still happen due to gpperfmon's harvest mode.
-
Committed by Adam Lee
1. Pass the external table encoding to COPY's options, then set cstate->file_encoding to it, for both reading and writing.
2. After the merge, the COPY state no longer has a client-encoding member, which used to be set to the target encoding so that converted data was received as a client would get it; now the file encoding (from the COPY options) is passed in for direct conversion.
-
Committed by Adam Lee
gppc.c: In function ‘TFGetFuncExpr’:
gppc.c:1255:3: error: implicit declaration of function ‘exprType’ [-Werror=implicit-function-declaration]
   exprType(list_nth(fexpr->args, argno)) != typid)
   ^~~~~~~~
-
- 09 Jun 2018 (3 commits)
-
-
Committed by Andreas Scherbaum
* Add start_ignore and end_ignore around all gp_inject_fault loads
-
Committed by Ashwin Agrawal
-
Committed by Lisa Owen
-
- 08 Jun 2018 (16 commits)
-
-
Committed by Tom Lane
This commit pulls in the latest tzdata from Postgres 11. We intentionally left out comment changes to `src/backend/utils/adt/datetime.c` because they are not applicable (yet).

> DST law changes in North Korea. Redefinition of "daylight savings" in
> Ireland, as well as for some past years in Namibia and Czechoslovakia.
> Additional historical corrections for Czechoslovakia.
>
> With this change, the IANA database models Irish timekeeping as following
> "standard time" in summer, and "daylight savings" in winter, so that the
> daylight savings offset is one hour behind standard time not one hour
> ahead. This does not change their UTC offset (+1:00 in summer, 0:00 in
> winter) nor their timezone abbreviations (IST in summer, GMT in winter),
> though now "IST" is more correctly read as "Irish Standard Time" not "Irish
> Summer Time". However, the "is_dst" column in the pg_timezone_names view
> will now be true in winter and false in summer for the Europe/Dublin zone.
>
> Similar changes were made for Namibia between 1994 and 2017, and for
> Czechoslovakia between 1946 and 1947.
>
> So far as I can find, no Postgres internal logic cares about which way
> tm_isdst is reported; in particular, since commit b2cbced9 we do not
> rely on it to decide how to interpret ambiguous timestamps during DST
> transitions. So I don't think this change will affect any Postgres
> behavior other than the timezone-view outputs.
>
> Discussion: https://postgr.es/m/30996.1525445902@sss.pgh.pa.us

(cherry picked from commit 234bb985)
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Tom Lane
The non-cosmetic changes involve teaching the "zic" tzdata compiler about negative DST. While I'm not currently intending that we start using negative-DST data right away, it seems possible that somebody would try to use our copy of zic with bleeding-edge IANA data. So we'd better be out in front of this change code-wise, even though it doesn't matter for the data file we're shipping. Discussion: https://postgr.es/m/30996.1525445902@sss.pgh.pa.us (cherry picked from commit b45f6613)
-
Committed by Jesse Zhang
This should have been part of commit f590dc94, but we forgot. Now remove them for good.
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Scott Kahler
-
Committed by David Yozie
-
Committed by Ning Yu
`SHOW memory_spill_ratio` always displayed 20 when it was the first query in a connection (if you run the query in psql and pressed TAB while entering it, the implicit queries run by the tab-completion function count as the first). The root cause is that SHOW commands are bypassed in resource groups, so the bound resource group was never assigned and its settings were never loaded. To display the proper value in this case we now load the resource group settings even for bypassed queries.
-
Committed by Ashwin Agrawal
Before:
  qp_functions     ... ok (76.24 sec)  (diff:0.06 sec)
  qp_gist_indexes4 ... ok (88.46 sec)  (diff:0.07 sec)
  qp_with_clause   ... ok (130.70 sec) (diff:0.32 sec)
After:
  qp_functions     ... ok (4.49 sec)   (diff:0.06 sec)
  qp_gist_indexes4 ... ok (16.18 sec)  (diff:0.06 sec)
  qp_with_clause   ... ok (54.41 sec)  (diff:0.30 sec)
-
Committed by Lisa Owen
* docs - misc updates to gptransfer
  - conref from best practices to admin guide
  - qualify use for migration to a different number of segments
  - misc edits
* conditionalize
-
Committed by David Yozie
* docs - draft update for migrating with gpcopy
* docs - migrating w/ gpcopy - updated ditamap
* add 'pivotal' condition
* some edits, additions, reorg
* removing ssh as a requirement (per comment on related PR)
* Clarifying that same-cluster copies aren't supported
* 4.3.16 -> 4.3.26
* replace --truncate with --truncate-source-after
* add note to start clusters in restricted mode
* add note about md5 validation when migrating between major versions
* add pg_dump, pg_dumpall, psql dependencies in prerequisites section
* add info about actual free space needed for migration
* updating discussion of free space calculation
* updating to use new gpcopy option names
* more updates to clarify migration proc
* removing contentious statement about indexing (not present in main docs); adding resource groups to list of objects copied
* removing note about md5xor not working for version migration
* add note regarding validation with --append
* add note to install client package to get gpcopy dependencies
* removed --no-compression info as that's the default for same-host copies; relocated basic migration instructions
* replacing error section with reviewed info from ref page
* updating migration topics to use procedure format
* adding some post-migration steps for gpcopy
* address more feedback from brian; reorganize migration steps a bit; add general post-migration instructions from the relnotes
-
Committed by Lisa Owen
-
Committed by Taylor Vesely
The return value of tzparse() changed as of commit b749790a, but the corresponding tzparse() call in pg_load_tz() was never updated. As a result, under certain circumstances the server might pick a bogus timezone.
-
Committed by Bhuvnesh Chaudhary
For semi-join queries, if the constraints can eliminate the scanned relations, the resulting relation should be marked as a dummy, and any join using it should be a dummy join.
-
Committed by mkiyama
-
Committed by David Yozie
-
Committed by Chris Hajas
The only change in the tarball is that we have removed the libraries and header files for DDBoost. These have already been removed for CentOS for some time.
Co-authored-by: Chris Hajas <chajas@pivotal.io>
Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 07 Jun 2018 (9 commits)
-
-
Committed by Shoaib Lari
We have added a unit test for gpaddmirrors covering the case where the mirror data directories are provided interactively.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Jialun
CPU usage of cpuset groups should also be displayed in gp_toolkit.gp_resgroup_status.
-
Committed by Pengzhou Tang
Previously, for an interconnect connection, when no more data was available at the sender peer, the sender sent a customized EOS packet to the receiver and disabled further send operations with shutdown(SHUT_WR); then it immediately closed the connection entirely with close(), counting on the kernel and TCP stack to guarantee the data was delivered to the receiver. The problem is that on some platforms, if the connection is closed on one side, the TCP behavior is undefined: packets may be lost and the receiver may report an unexpected error. The correct way is for the sender to block on the connection until the receiver gets the EOS packet and closes its end; only then can the sender close the connection safely.
-
Committed by Pengzhou Tang
For a Result node with a one-time filter, if its outer plan is non-empty and contains a Motion node, it needs to squelch the outer node explicitly when the one-time filter check is false. This is especially necessary for a Motion node underneath it: ExecSquelchNode() forces a stop message so the interconnect sender doesn't get stuck resending or polling for ACKs.
-
Committed by Pengzhou Tang
This is a quick fix to make the dispatch test pass; in the long term we need to redesign the dispatch test or turn it into a unit test.
-
Committed by Xiaoran Wang
* upgrade pgbouncer to 1.8.1
* support PAM/HBA auth types
* update submodule pgbouncer commit
* update pgbouncer's commit to support SSL connections
* change pgbouncer's server_tls_ciphers default value
-
Committed by Bhuvnesh Chaudhary
-
Committed by Omer Arap
-
Committed by Mel Kiyama
* docs - add gpcopy utility
* docs - gpcopy - review comment updates
* docs - gpcopy reference - review comment updates and edits; also changed --dest-host to be a required option
* docs - gpcopy reference - command option changes:
  - --schema-only changed to --metadata-only
  - --database changed to --dbname
  - --batch-size changed to --jobs
* docs - gpcopy ref: fix typos
-
- 06 Jun 2018 (8 commits)
-
-
Committed by Lisa Owen
* docs - discuss the partner connector (GPPC) API
* address most of the edits requested by david
* add to requirements
* add the memory context functions
-
Committed by anki-code
-
Committed by Ashwin Agrawal
-
Committed by Pengzhou Tang
Dispatch tests don't expect backends created by other tests or by auxiliary processes like FTS and GDD; this commit disables GDD as well to make the dispatch tests stable.
-
Committed by Jialun
- Change strncpy to StrNCpy to make sure the destination string is NUL-terminated.
- Initialize some variables before using them.
-
Committed by Jesse Zhang
Commit 1c1945fd9dbaf217062596062f73beac4934d7b6 broke compilation when we use the trivial / dummy implementation of resource group. The fix for that is trivial (this commit). But it begs the question: should we make the build system less magical (switching the implementation based on the platform), and instead just always exercise the dummy implementation (or at least the building of it).
-
Committed by Ashwin Agrawal
The previous algorithm scanned the entire directory to find the specific relfilenode extensions to be deleted, which is not optimal for large directories. This patch introduces extra logic based on the table extension pattern, which avoids the directory scan. The algorithm assumes that for CO tables, at a given concurrency level, either all columns have the file or none do, and that file extensions follow this pattern:

Heap tables: contiguous extensions, no upper bound
AO tables:   non-contiguous extensions [.0 - .127]
CO tables:   non-contiguous extensions [.0 - .127] for the first column,
             [.128 - .255] for the second column, [.256 - .383] for the third column, etc.

The AO file format can be treated as a special case of CO tables with one column.

High-level logic:
1) Find which concurrency levels the table has files for. This is calculated based off the first column and performs 127 (MAX_AOREL_CONCURRENCY) unlink() calls.
2) Iterate over the single column and delete files at all concurrency levels. For AO tables this exits fast.

This algorithm could be used for heap tables as well, but to prevent merge conflicts it is currently only used for AO/CO tables.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Ashwin Agrawal
Without this patch, the storage layout is not known in the md and smgr layers. Lacking this information, sub-optimal operations have to be performed generically for all table types; for example, heap-specific functions like ForgetRelationFsyncRequests() and DropRelFileNodeBuffers() get called even for AO and CO tables. This adds a new RelFileNodeWithStorageType struct to pass the storage type down to the md and smgr layers. The XLOG_XACT_COMMIT and XLOG_XACT_ABORT WAL records use the new structure, which carries a RelFileNode plus the storage type.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-