- 19 Feb 2019, 1 commit
-
-
Committed by Jialun
Previously, gpexpand generated the template by copying the master data directory directly, which may be unsafe: though we lock the catalog, there may still be other on-disk changes. So we use pg_basebackup instead of a native copy.
-
- 18 Feb 2019, 3 commits
-
-
Committed by Jinbao Chen
Because ordered-set aggs always have a nonempty aggorder, numPureOrderedAggs currently counts WITHIN GROUP aggs, which seems reasonable. The reason we track numPureOrderedAggs in AggClauseCosts is that the group aggregate cost is much higher than hash aggregate; we usually use hash aggregate with DISTINCT and group aggregate with ORDER BY. With WITHIN GROUP we must also use the group aggregate, so we need to add numPureOrderedAggs when the query contains a WITHIN GROUP agg.
-
Committed by Richard Guo
Previously we checked whether OldestXmin is valid in heap_page_prune_opt and exited early if not. This was mainly because, in the case of persistent tables, GPDB may call into here without having a local snapshot and thus no valid OldestXmin. Since we no longer support persistent tables, revert this check to an assertion to match upstream.
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Teng zhang
-
- 16 Feb 2019, 21 commits
-
-
Committed by Mel Kiyama
-
Committed by Mel Kiyama
* docs - support for special characters in schema/table names for the --include-table option.
* docs - support for special characters in schema/table names for the --include-table-file option.
* docs - remove misplaced word "support"
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Adam Berlin
These only run if OpenSSL is enabled.
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Adam Berlin
To fix the SSL TAP tests, revert most of the implementation to upstream 9.6 (commit 41740b9ef, right before a major refactor to ServerSetup.pm). This removes the GPDB-specific implementations of the test node setup, which weren't really setting up a new node anyway; they were modifying whatever GPDB cluster they found in the environment. Since the SSL tests don't need a full cluster to run, we can stop carrying that diff. Notable remaining differences from 9.6 include:
- the commenting out of the wal_retrieve_retry_interval GUC, which is not yet supported in GPDB
- the addition of GPDB-specific options to `pg_ctl start` to create a standalone segment, and the use of utility mode to connect to it
- the use of note() instead of diag() in the test suite, for cosmetic reasons (we can probably remove that diff once we catch up to 9.6)
- the continued commenting-out of SAN tests, which we plan to reinstate in a future commit
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Jimmy Yih
When running gprecoverseg, gpinitstandby, or gpaddmirrors, we actually run gpconfigurenewsegment to execute pg_basebackup. Log the progress output of pg_basebackup to a temporary file for user and/or utility consumption. The file is located in ~/gpAdminLogs, or wherever the user specified with the -l flag used by most Greenplum Python utilities, and it is removed after a successful run. The pg_basebackup output contains carriage returns, so users must handle them themselves in their editor of choice.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
Co-authored-by: Mark Sliva <msliva@pivotal.io>
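Because the logged pg_basebackup output contains carriage returns, anything consuming the file has to normalize them before the progress lines are readable. A minimal sketch of one way to do that; the function name and the terminal-overwrite assumption are mine, not part of the utility:

```python
def visible_progress_lines(raw: str) -> list[str]:
    """Collapse carriage-return overwrites into the text a terminal
    would leave visible on each line.

    Assumes each '\r' rewrite is at least as long as the text it
    overwrites, which holds for typical percentage-style progress output.
    """
    return [line.split("\r")[-1] for line in raw.split("\n")]
```

A consumer could feed this the contents of the temporary log file from ~/gpAdminLogs to recover only the final state of each progress line.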
-
Committed by Jimmy Yih
The gpconfigurenewsegment logging would always go to the default location "~/gpAdminLogs/", while the callers of gpconfigurenewsegment could have been logging to a different location. Make this consistent.
Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Mark Sliva
This follows up the addition of a start time during pg_basebackup. In the test, we gave the execSQL mock additional power so that queries to pg_stat_replication and pg_stat_activity can both be tested simultaneously.
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Mark Sliva
Since this is a warning, the stack trace is excessive.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Alexandra Wang
The mirror should PANIC immediately when recovery is in a consistent state but there are existing invalid page entries in the invalid_page hash table. This makes sure the check is not delayed until the mirror is promoted, and helps catch missing-file problems on the mirror sooner. The mirror immediate-PANIC logic was introduced in upstream commit 1e616f63. Since no test exists for this in upstream, and the logic is used by AO tables as well, this patch adds a test to validate the stated behavior.
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal
This is a follow-up commit to 69ebd66c, incorporating Asim's review feedback.
-
Committed by Ashwin Agrawal
This is to avoid marking the primary down during this test while it restarts the primary.
-
Committed by Jacob Champion
gpstop is the only consumer of this one-off helper, so move it to gpstop. The recently added WorkerPool properties make this possible. Also remove the requirement for callers to keep track of how many commands have been added to the pool, similarly to what we did for wait_and_printdots(). Additionally, fix some unit test bugs, where the assertions on mocks weren't actually testing anything, by properly speccing the Mock objects themselves.
-
Committed by Jacob Champion
WorkerPool.completed now tells you how many commands are currently in the completed queue; .assigned tells you how many commands are either pending or completed. The latter property replaces the previous .num_assigned attribute, which was not correctly updated when the completed queue was emptied.
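The property-based accounting described here can be sketched as follows. MiniPool and its queue layout are illustrative assumptions, not the real gppylib WorkerPool:

```python
import queue

class MiniPool:
    """Illustrative sketch: counts are derived from the queues
    themselves, so they cannot drift the way a manually maintained
    num_assigned counter could once the completed queue is drained."""

    def __init__(self):
        self.work_queue = queue.Queue()       # pending commands
        self.completed_queue = queue.Queue()  # finished commands

    def add_command(self, cmd):
        self.work_queue.put(cmd)

    def finish_one(self):
        # pretend a worker ran the next pending command
        self.completed_queue.put(self.work_queue.get())

    @property
    def completed(self):
        # commands currently sitting in the completed queue
        return self.completed_queue.qsize()

    @property
    def assigned(self):
        # pending or completed; shrinks when completed items are drained
        return self.work_queue.qsize() + self.completed_queue.qsize()
```

Deriving both counts from queue sizes is the design point of the commit: emptying the completed queue automatically lowers `assigned`, with no separate counter to forget to update.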
-
Committed by Jacob Champion
base.join_and_indicate_progress() waits for the pool to complete its work, printing indication dots to stdout once per second. If it takes less than a second for the pool to complete, we won't print anything (and we also won't hang for a second waiting for nothing to happen). The previous implementation required the caller to store a running tally of how many commands had been added to the pool; that requirement is now dropped. Unlike wait_and_printdots(), join_and_indicate_progress() *always* prints to file. Don't call it if you don't want to print; use WorkerPool.join() directly.
-
Committed by Jacob Champion
The current status reporting methods are difficult to test (they try to do a little too much, IMO). Introduce a simpler solution -- allow join() to accept a timeout. All status reporting can now be implemented using this primitive.
-
Committed by Jacob Champion
WorkerPool needs some help, and we need some test coverage before I can fix and refactor.
-
Committed by Adam Berlin
This reverts commit 848733b6.
-
- 15 Feb 2019, 8 commits
-
-
Committed by Ning Yu
A duration can be set on gpexpand phase 2 so that it can quit before redistributing all the tables. There are behave tests to verify this; however, they should also check whether gpexpand quit on time.
-
Committed by Daniel Gustafsson
The else clause on the for loop is superfluous, as the loop doesn't contain any break statement. Removing it yields the same codepath but improves readability. This also removes an unused import (time) as well as fixes a set of typos.
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
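For context, Python's `else` on a `for` loop runs only when the loop finishes without hitting a `break`; with no `break` in the body, the `else` block always runs and is just dead weight after the loop. A small example of the construct doing real work:

```python
def contains_even(nums):
    """True if any number in nums is even."""
    for n in nums:
        if n % 2 == 0:
            break
    else:
        # reached only when the loop ran to completion without break
        return False
    return True
```

This is exactly the pattern the commit is cleaning up: when the `break` is absent, `for ... else` is equivalent to placing the `else` body directly after the loop.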
-
Committed by Paul Guo
Also refactor subquery_motionHazard_walker() to make it simpler.
-
Committed by Ning Yu
Unless cancelled with ctrl-c, the gpexpand redistribution phase's return code is always 0; it does not indicate whether the redistribution succeeded or not. To get this information we need to check the status in gpexpand.status.
-
Committed by Ning Yu
Most temp tables won't live for long, so there is no need to redistribute them. On the other hand, if they are recorded in gpexpand.status_detail and disappear before redistribution, an error is reported to the user, which just causes unnecessary panic.
-
Committed by Taylor Vesely
Pull from upstream Postgres to make DefineIndex recursively create partitioned indexes. Instead of creating an individual IndexStmt for every partition, create indexes by recursing on the partition children. This aligns index creation with upstream in preparation for adding INTERNAL_AUTO relationships between partition indexes.
* The QD will now choose the same name for partition indexes as Postgres.
* Update tests to reflect the partition index name changes.
* The changes to DefineIndex are mostly cherry-picked from Postgres commit 8b08f7d4.
* transformIndexStmt and its callers have been aligned with Postgres REL9_4_STABLE.
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Jesse Zhang
This reverts the following commits:
- commit 0ee987e64 - "Don't dispatch index creations too eagerly in ALTER TABLE."
- commit 28dd0152 - "Enable alter table column with index (#6286)"
The motivation of commit 0ee987e64 was to stop eager dispatch of index creation during ALTER TABLE and instead perform a single dispatch. Doing so prevents "index name already exists" errors when altering data types on indexed columns, such as:
ALTER TABLE foo ALTER COLUMN test TYPE integer;
ERROR: relation "foo_test_key" already exists
Unfortunately, without eager dispatch of index creation the QEs can choose a different name for a relation than was chosen on the QD. Eager dispatch was the only mechanism we had to ensure a deterministic and consistent index name between the QE and QD in some scenarios. In the absence of another mechanism we must revert this commit. This also rolls back commit 28dd0125, which enabled altering data types on indexed columns and required commit 0ee987e64.
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
Committed by Ning Yu
UDPIFC receives data in an rx thread. Errors in this thread cannot be raised directly; they are recorded in memory, and the main thread is responsible for raising them at a proper time. It is possible for the rx thread to record an error after the last TEARDOWN; technically it should be counted against the last query, but a bug caused the main thread to raise it in the next SETUP, so the new query failed immediately due to an out-of-date error. This is fixed by discarding any rx thread errors at the beginning of SETUP.
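The deferred-error pattern and its fix can be sketched as follows. The names are illustrative; the real UDPIFC code is C inside the interconnect, not this Python:

```python
import threading

class RxErrorHolder:
    """Record errors from a receive thread and raise them later from
    the main thread; setup() discards anything left over from the
    previous query, which is the fix described above."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pending = None

    def record_rx_error(self, exc):
        # rx thread: never raise here, just remember the first error
        with self._lock:
            if self._pending is None:
                self._pending = exc

    def setup(self):
        # start of a new query: any stale error belongs to the old query
        with self._lock:
            self._pending = None

    def check(self):
        # main thread, at a safe point: raise any recorded error
        with self._lock:
            exc, self._pending = self._pending, None
        if exc is not None:
            raise exc
```

With this shape, an error recorded after the last TEARDOWN is simply cleared by the next setup() instead of poisoning the new query.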
-
- 14 Feb 2019, 7 commits
-
-
Committed by Georgios Kokolatos
Upstream added support for multiple kinds of external toast datums (commit 36820250). At the heart of the implementation, the representation of the inline portion of a short varlena was modified to store a tag describing the location of the datum pointed to, namely memory or disk. The tag took the place of the member which described the length of the datum pointed to. Since in upstream the length, for on-disk datums, was always set to (VARHDRSZ_EXTERNAL + sizeof(struct varatt_external)), the enum value for the corresponding tag was chosen to be exactly that, i.e. 18. In Greenplum, the exact same thing happens. However, due to historic reasons, there is an additional two-byte padding in the struct. That representation has been (and still is) reflected in VARHDRSZ_EXTERNAL, which means that Greenplum has been storing a length value two bytes larger than upstream. This commit updates the value of VARTAG_ONDISK to match the value that Greenplum has historically been storing. The other solution would have been to rewrite the data during upgrade, but that seems unreasonably risky and invasive. This commit also removes a comment stating that Greenplum did not always set the length of the datum pointed to. An extensive search in the latest 5 and 4 versions of Greenplum did not find this to be true anymore. If it were, then some data rewriting would have been required while upgrading Greenplum. Previous versions of Greenplum (pre-2009) have not been checked, and it should be assumed that data from those versions will break if binary-upgraded to the current version. Removes a GPDB_94_MERGE_FIXME.
Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Paul Guo
Now that we have parameterized paths (since PG 9.2) and LATERAL (since PG 9.3, although we do not support the full functionality), merge join and hash join paths need to take them into account. Besides, for nestloop paths the previous code was wrong:
1. It did not allow motion for paths that include an index (path_contains_inner_index()). That is wrong. Here are two examples of index paths which allow motion:
   -> Broadcast Motion 3:3 (slice1; segments: 3) (cost=0.17..24735.67 rows=86100 width=0)
      -> Index Only Scan using t2i on t2 (cost=0.17..21291.67 rows=28700 width=0)
   -> Broadcast Motion 1:3 (slice1; segments: 1) (cost=0.17..6205.12 rows=259 width=8)
      -> Index Scan using t2i on t2 (cost=0.17..6201.67 rows=29 width=8)
         Index Cond: (4 = a)
2. The inner path and outer path might require upper nodes for parameterized paths, so the current check bms_overlap(inner_req_outer, outer_path->parent->relids) is definitely not sufficient; besides, the outer path could have parameterized paths also.
For nestloop join, case 1 is covered by the test case added in join_gp. Case 2 is partially tested by the test case in join.sql (although ignored) in this patch. Note the change in this patch is conservative. In theory, we could follow the subplan code to allow broadcast for a base rel if needed (for that solution no motion is needed), but that requires much effort and does not seem warranted given we will probably refactor the related code for lateral support in the near future.
-
Committed by Richard Guo
This removes several GPDB_94_MERGE_FIXMEs.
-
Committed by Richard Guo
Currently we conduct non-equivalent-class deduction only from quals of inner joins. This patch avoids adding outer join quals to non_eq_clauses from the beginning.
Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Sambitesh Dash
Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by Sambitesh Dash
We now pull down the libsigar artifacts (.tar.gz) from GCS for CentOS instead of Ivy. After removing all the Ivy dependencies for CentOS 6 and 7, `make sync_tools` no longer creates {GPDB_SRC}/gpAux/ext/rhel{6,7}_x86_64 in the Concourse instance. This commit creates the necessary directories for the CentOS 6 and 7 platforms.
Co-authored-by: Nandish Jayaram <njayaram@pivotal.io>
Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Karen Huddleston
These should have been removed when the dependencies were shifted from Ivy to the operating system in commit 07175e06, but we missed these. Note that the copylibs target was copying the libraries from the system into the gpdb binaries, but that was wrong, so we stopped it.
Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-