- 09 Nov 2018, 7 commits
-
-
Committed by Jimmy Yih
Like pg_basebackup, we should generate the recovery.conf file at the end of pg_rewind so that utilities do not have to take care of that step. This is required in Greenplum mainly because users will not use pg_rewind manually. Note that we only autogenerate the recovery.conf file if pg_rewind is called with a source server, because we utilize the libpq connection information. We expect pg_rewind usage to only be through gprecoverseg. Added a TODO message to create a common library between pg_basebackup and pg_rewind for creating the recovery.conf file, since most of this code addition is copied from the pg_basebackup.c file (there are a couple of diffs to make it work for pg_rewind). Co-authored-by: Paul Guo <pguo@pivotal.io>
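For context, a recovery.conf of the kind pg_basebackup writes contains a standby marker plus the libpq connection string; an illustrative sketch (hostname, port, and user are made-up values, not taken from this commit):

```ini
# recovery.conf (illustrative)
standby_mode = 'on'
primary_conninfo = 'host=source-host port=5432 user=gpadmin'
```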
-
Committed by Jimmy Yih
The old master can be slow to restart its WAL stream with the promoted standby as part of the automatic timeline switch to the new timeline after finishing old-timeline catchup. The SELECT query on pg_stat_replication was executed too fast, and the entry would show the STARTUP state, which is the default state during timeline switching. To make the test more deterministic, loop the check on pg_stat_replication for the STREAMING state with a 30 second timeout.
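The deterministic wait described above is a generic poll-with-timeout pattern; a minimal sketch in Python (the real test issues a SQL query against pg_stat_replication rather than calling a Python function):

```python
import time

def poll_until(check, timeout=30.0, interval=0.5):
    """Poll check() until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# In the test, check() would run something like
#   SELECT state FROM pg_stat_replication;
# and return True once the replication entry reports the streaming state.
```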
-
Committed by Heikki Linnakangas
This is very similar to the "WAL consistency checking" facility added in PostgreSQL v10. But we don't have that yet, so this will do as a stopgap until we catch up. With this patch, a full image of each page touched by the targeted WAL records is dumped to a file in the data directory, "bmdump_<segno>_insert". At replay, the image after replaying each WAL record is dumped to "bmdump_<segno>_redo". You can compare the files with each other to verify that WAL redo faithfully reproduces the same page contents as in normal operation. This is debugging code that is normally disabled, and needs to be enabled by the developer if needed.
-
Committed by Heikki Linnakangas
I botched the WAL-logging of the content words in the _bitmap_write_bitmapwords_on_page function. The start position for changed content words is kept in the 'cwords' variable, not 'startWordNo'. 'startWordNo' is the starting position in the input buffer to copy from, not the position in the target page to copy to. Move the lines that record this information in the WAL record closer to the corresponding memcpy()s that make the changes in the master. This makes it easier to verify that we're recording the same changes in the WAL record that we are making to the page. I found this by running a patched version that wrote a full page image after writing or replaying each XLOG_BITMAP_INSERT_WORDS record, with the 'bitmap_index' regression test. This hopefully explains the assertion failure that Ashwin reported at https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/TkoXWveDS6g although I was not able to reproduce that.
-
Committed by Mel Kiyama
This will be backported to 5X_STABLE
-
Committed by Mel Kiyama
-
Committed by David Krieger
We add a 'make perfcheck' target that skips the time-consuming but not functionally necessary parts of this version of upgrade, in order to speed up performance testing of data transfer and data scaling. We also always print out basic run timing for reference. Co-authored-by: Melanie Plageman <mplageman@pivotal.io> Co-authored-by: Adam Berlin <aberlin@pivotal.io>
-
- 08 Nov 2018, 13 commits
-
-
Committed by Heikki Linnakangas
There were a couple of issues with the old code: 1. It was extending the relation, and doing pallocs, while in a critical section. Those can fail if you run out of disk space or memory, which would lead to a PANIC. Running out of disk space could be rather nasty, because after WAL replay we would try to finish the incomplete insertion, which would be quite likely to run out of disk space again. 2. The "incomplete actions" mechanism, including the bm_safe_restartpoint() rmgr API function, went away in PostgreSQL 9.4. Now that we've merged with 9.4, we need to deal with them differently. After this patch, the insertion is performed in one atomic operation, with a single WAL record. The XLOG_BITMAP_INSERT_WORDS WAL record format is changed so that it can represent the insertion on several bitmap pages in one record. Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io> Reviewed-by: BaiShaoqi <sbai@pivotal.io>
-
Committed by Heikki Linnakangas
Create the CdbHash object in the initialization phase, and reuse it for all the tuples. This makes the same change for both the ReshuffleExpr and the Reshuffle plan node. I'm not sure how much performance difference this makes, but it seems cleaner anyway. In passing, copy-edit some comments and indentation. Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
-
Committed by Zhenghua Lyu
Commit 6dd2759a did a nice cleanup of SingleQE's segid, but forgot to remove this global variable. Remove it here.
-
Committed by Daniel Gustafsson
This utility was using the hw.physmem sysctl, which is undocumented on macOS and most likely a remnant from the FreeBSD origins of macOS. The supported way is to use hw.memsize.
-
Committed by Heikki Linnakangas
The definition of cdb_randint() was pretty hard to understand. Where did the 0.999999 constant come from, for example? And the signature of cdb_randint() was also surprising, with the upper bound as the first argument and the lower bound second. The call in makeRandomSegMap() got that backwards, although it still mostly worked. Replace cdb_randint() with a more straightforward cdbhashrandomseg(int numsegments) function. Also simplify the 'rrindex' mechanism used in cdbhash.c to choose a segment at random. The comment in makeCdbHash() that claimed that calling cdbhashnokey() repeatedly would behave in a round-robin fashion was wrong: the 'rrindex' counter that was incremented on every call was fed to a hash function, which meant that we were using the hash function as a random number generator. Since we were using it as a random number generator anyway, remove the 'rrindex' field and call random() directly in cdbhashnokey(). Fixes https://github.com/greenplum-db/gpdb/issues/5899. Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
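Given the signature described, the replacement presumably reduces to a plain uniform draw over the segments; a sketch in Python (the actual implementation is C, and this is only what the commit message implies, not the real code):

```python
import random

def cdbhashrandomseg(numsegments):
    # Return a random segment index in [0, numsegments), chosen uniformly.
    # No hash function is involved: random() is used directly,
    # instead of feeding a counter through a hash as before.
    return random.randrange(numsegments)
```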
-
Committed by Jimmy Yih
All of the pg_rewind tests currently only test scenarios where pg_rewind finds divergence and rewinds the target. This test being added will test the scenario where pg_rewind decides that no rewind is required on the target and that the target will be able to catch up automatically to the source server's new timeline. Co-authored-by: Paul Guo <pguo@pivotal.io>
-
Committed by Jimmy Yih
When the recovery_target_timeline setting is set in recovery.conf, we expect the recoveryTargetTLI variable to change (e.g. when set to 'latest', it will read the primary's history file to get the latest TLI). This logic seems to have been removed from the bottom of readRecoveryCommandFile() when WAL replication was backported for master/standby replication. Co-authored-by: Paul Guo <pguo@pivotal.io>
-
Committed by Jimmy Yih
When WalRcv's walRcvState is not WALRCV_STOPPED, we do an early exit. This early exit is not in upstream Postgres and is legacy Greenplum code. It prevented the timeline switch scenario from happening correctly, because the WAL receiver's state is set to WALRCV_WAITING, expecting to be woken up by a SetLatch or PMSIGNAL_START_WALRECEIVER signal call at the end of the RequestXLogStreaming() call. Co-authored-by: Paul Guo <pguo@pivotal.io>
-
Committed by Jimmy Yih
At the beginning of pg_rewind, we should launch a quick single-user mode postgres session on the target instance to ensure the target has completed crash recovery and logs a clean shutdown in its pg_control file. Co-authored-by: Paul Guo <pguo@pivotal.io>
-
Committed by Kevin Yeap
To create or alter an EXTERNAL TABLE, the syntax requires e.g. CREATE EXTERNAL TABLE foo ... or ALTER EXTERNAL TABLE ... But the syntax for a comment on an EXTERNAL TABLE is just: COMMENT ON TABLE foo ... Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Karen Huddleston <khuddleston@pivotal.io> Co-authored-by: Kevin Yeap <kyeap@pivotal.io>
-
Committed by Ekta Khanna
Prior to this commit, the fault status checked by QD could be too fast. The QE reader executing the `DECLARE CURSOR` statement may not have hit the fault under test by the time QD checked its status. This commit updates the test to use the `gp_wait_until_triggered_fault()` interface to make it more deterministic. The issue was discovered as part of concourse upgrade. Co-authored-by: Asim R P <apraveen@pivotal.io>
-
Committed by Alexandra Wang
The sqldump that is generated from the icw_gporca_centos6 job can't be fully loaded into the test cluster for the pg_upgrade and gpexpand jobs, because those clusters are missing some required libraries and extensions which would have been installed during ICW. However, all of the missing pieces are compiled objects that we do not ship. We do not expect a sqldump file to restore these compiled objects. Therefore we are going to drop the objects from the database that depend on bits we don't ship. We are dropping any function that is not installed in $GPHOME/lib/postgresql ($libdir), and we are explicitly dropping functions that depend on:
+ $libdir/gp_replica_check
+ $libdir/gpformatter.so
+ $libdir/gpextprotocol.so
+ $libdir/gpcc_test
+ $libdir/gpcc_demo
+ $libdir/tabfunc_gppc_demo
This does not mean that we aren't supporting upgrading this class of objects; note that they are still upgraded as part of the smoke test at the end of ICW. We are also leaving in many functions that depend on bits we do ship, so downstream consumers of this data set are not losing a class of objects to upgrade. Also note that these destructive changes to the database will only be applied after ICW succeeds, and so shouldn't hamper debugging ICW in the future. Co-authored-by: Alexandra Wang <lewang@pivotal.io> Co-authored-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Jacob Champion
The demoprot_untrusted protocol was already being dropped because of a sporadic sort order change after upgrade. The demoprot_untrusted2 protocol, added recently in d4d4a16c, is failing now, and should be dropped for the same reason. It looks like a sort bug in pg_dump prevented this from showing up until now; b5078cc7 fixed that bug and exposed this problem again.
-
- 07 Nov 2018, 3 commits
-
-
Committed by Daniel Gustafsson
The DumpableObjectType enum and the newObjectTypePriority array must be kept in perfect sync, since the array index is mimicking the enum key for lookups. The Greenplum-specific options had been placed last to avoid merge conflicts, but that's not a re-ordering we can do, since it breaks the synchronization. Re-order the array, and also fix the sort orders and remove the FIXME (placing DO_TYPE_STORAGE_OPTIONS before the PRE_DATA_BOUNDARY to ensure it's in the right section). Reviewed-by: Asim R P <apraveen@pivotal.io>
-
Committed by Heikki Linnakangas
This affects the syntax summary printed by psql \h command.
-
Committed by ZhangJackey
Now we have partial tables and a flexible gang API, so we can allocate gangs according to numsegments. With commit 4eb65a53, GPDB supports tables distributed on partial segments, and with the series of commits (a3ddac06, 576690f2), GPDB supports the flexible gang API. Now is a good time to combine both new features. The goal is to create gangs only on the necessary segments for each slice. This commit also improves singleQE gang scheduling and does some code cleanup. However, if ORCA is enabled, the behavior is just like before. The outline of this commit is:
* Modify the FillSliceGangInfo API so that gang_size is truly flexible.
* Remove the numOutputSegs and outputSegIdx fields in the motion node. Add a new field isBroadcast to mark if the motion is a broadcast motion.
* Remove the global variable gp_singleton_segindex and make the singleQE segment_id random (by gp_sess_id).
* Remove the field numGangMembersToBeActive in Slice, because it is now exactly slice->gangsize.
* Modify the message printed if the GUC Test_print_direct_dispatch_info is set.
* Explicit BEGIN now creates a full gang.
* Format and remove destSegIndex.
* The isReshuffle flag in ModifyTable is useless, because it is only used when we want to insert a tuple to a segment which is outside the range of numsegments.
Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
-
- 06 Nov 2018, 6 commits
-
-
Committed by Abhijit Subramanya
Add a test for the nullif expression to make sure that the ORCA translators are working as expected. Co-authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Heikki Linnakangas
Avoids looking through domains, array types, etc. on every call. That seems like a more sensible API, since the data types don't change during the lifetime of a CdbHash. Make cdbhash() more convenient for callers by handling NULLs within the function. This way the callers don't need to do the NULL check and call either cdbhash() or cdbhashnull(). This also fixes the performance issue caused by the syscache lookups reported in https://github.com/greenplum-db/gpdb/issues/5961. The type is now checked only once, when the CdbHash object is initialized, instead of for every row. Reviewed-by: Melanie Plageman <mplageman@pivotal.io> Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
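The resolve-once, NULL-aware pattern described above can be sketched as follows (illustrative Python with hypothetical names and a made-up mixing step; the real code is C and resolves hash functions via syscache lookups at init time):

```python
class CdbHash:
    def __init__(self, hash_funcs):
        # Resolve the per-column hash functions once, up front,
        # instead of looking them up again for every row.
        self.hash_funcs = hash_funcs
        self.value = 0

    def hash_column(self, colidx, datum):
        # NULLs are handled here, so callers no longer have to branch
        # between a cdbhash()-style and a cdbhashnull()-style call.
        h = 0 if datum is None else self.hash_funcs[colidx](datum)
        # Fold the column hash into the running value (illustrative mixing).
        self.value = (self.value * 31 + h) & 0xFFFFFFFF
        return self.value
```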
-
Committed by Mel Kiyama
* docs - CREATE TABLESPACE command
  --removed filespace information
  --added per segment location syntax
  --added function gp_tablespace_location()
  GPDB 6.0 ONLY. NOTE: Does not include topics in Admin Guide. Assigning a tablespace for temp files currently does not work.
* docs - removed references to filespace.
* Small line edit
* Line edit
* line edit
* correct my own typo
* PostgreSQL -> Greenplum Database
* line edits
-
Committed by David Yozie
* Update gp_create_table_random_default_distribution to describe new 6.x rules
* Update from Daniel
* Using same wording for behavior in CREATE TABLE
-
Committed by Adam Berlin
When there are many concurrent operations on AO tables, it is possible the aoentry fetched during RegisterSegnoForCompactionDrop is a brand new entry that does not contain information about the current vacuum of the relation. In this case, compactedSegmentFileList contains the accurate list of segment files that have been compacted. Remove the elogif that assumes the aoentry is accurate. Note: The aoentry will not be evicted again after RegisterSegnoForCompactionDrop because the entry is marked as 'in use'. Co-authored-by: Asim R P <apraveen@pivotal.io>
-
- 05 Nov 2018, 10 commits
-
-
Committed by Heikki Linnakangas
We had duplicated code in a few places to reconstruct a DistributedBy clause from the policy of an existing relation. Use the existing function to do that. Rename the function to make_distributedby_for_rel(). That's a more descriptive name. Reviewed-by: Ning Yu <nyu@pivotal.io>
-
Committed by Heikki Linnakangas
loci_compatible() performs a more relaxed check than equal(). Doing the more stringent equal() check first is a waste of time.
-
Committed by Heikki Linnakangas
All callers of cdbpathlocus_compare were asking for a strict equality check.
-
Committed by Heikki Linnakangas
As far as I can tell, GPDB works the same as PostgreSQL with regards to path keys used for append rels, so I don't see why we'd need to do any transformation here. Regression tests are passing without it. This code has been moved around as part of the 9.2 merge, and some other cleanup, but goes all the way back to 2007 in the old pre-open-sourcing repository. The commit that introduced it was a massive commit with the message "Merge of Release-3_1_0_0-alpha1-branch branch down to HEAD", so I lost the trace of its origin there. I guess it was needed back then, but it seems unnecessary now.
-
Committed by Heikki Linnakangas
Notes in the testcase about backslash escaping:
- Need to add ESCAPE 'OFF' to COPY ... PROGRAM
- echo behaves differently on different platforms; force the use of a bash shell with the -E option.
Signed-off-by: Ming LI <liming01@gmail.com>
-
Committed by Ming LI
1) Fixes github issue https://github.com/greenplum-db/gpdb/issues/5925: If an environment variable value contains a single quote, it reports an error:
```
ERROR: external table env command ended with error. sh: -c: line 0: unexpected EOF while looking for matching `'' (seg0 slice1 172.31.81.199:6000 pid=7192)
DETAIL: sh: -c: line 1: syntax error: unexpected end of file
```
The external program executed with COPY PROGRAM or an EXECUTE-type external table is passed a bunch of environment variables. They are passed by adding them to the command line of the program being executed, with "<var>=<value> && export VAR && ...". However, the quoting in the code that builds that command line was broken. Fix it, and add a test.
2) It also fixes: a backslash should not be escaped by duplicating the backslash. Since single quotes are used as shell quotes, only ' needs to be escaped, to '\''; there is no need to escape backslashes. Most escaping problems occur when displaying the value.
Notes in the testcase about backslash escaping:
- Need to add ESCAPE 'OFF' to EXTERNAL WEB TABLE
- Need to add ESCAPE '&' for the LIKE predicate
- For the shell 'env' output, don't separate it into 2 columns, because the CI env has funny chars in variable values, e.g. "LS_OPTIONS=-N --color=tty -T 0" and "LESSOPEN=||/usr/bin/lesspipe.sh %s"
- echo behaves differently on different platforms; force the use of a bash shell with the -E option.
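The quoting rule the fix relies on (wrap in single quotes, rewrite embedded single quotes as '\'' , and leave backslashes alone) can be sketched like this (illustrative Python with a hypothetical helper name; the actual fix is in the C code that builds the command line):

```python
def shell_quote(value):
    # Inside single quotes, the shell treats every character literally
    # except the single quote itself, which is written as '\''
    # (close the quote, emit an escaped quote, reopen the quote).
    return "'" + value.replace("'", "'\\''") + "'"

# Backslashes need no doubling: shell_quote("a\\b") yields 'a\b'
```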
-
Committed by Ming Li
Signed-off-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by BaiShaoqi
-
Committed by Daniel Gustafsson
The duplication arose due to Greenplum backporting a commit which we've now gained via the merge. Remove the hunk which came via the backport to align us more with upstream.
-
Committed by Heikki Linnakangas
The check in the parser didn't recurse correctly, and therefore only checked whether the last DISTRIBUTED BY column was the same as any previous one. As long as the last column was unique, duplicates elsewhere in the list were ignored. Reviewed-by: Shaoqi Bai <sbai@pivotal.io>
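A correct duplicate check compares every column against all previously seen ones, not just the last element against the rest; sketched in Python (hypothetical helper, the real check lives in the C parser):

```python
def find_duplicate_column(columns):
    # Return the first column name that appears more than once anywhere
    # in the list, or None. The buggy version effectively only compared
    # the last element against the earlier ones, so "a, a, b" passed.
    seen = set()
    for name in columns:
        if name in seen:
            return name
        seen.add(name)
    return None
```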
-
- 03 Nov 2018, 1 commit
-
-
Committed by Bhuvnesh Chaudhary
-