- 12 Sep, 2018: 1 commit
-
-
Committed by Chris Hajas
The DDBoost tests require access to an instance that is currently experiencing network connectivity issues. We're removing these jobs from blocking the release until the networking issues are resolved. Authored-by: Chris Hajas <chajas@pivotal.io>
-
- 11 Sep, 2018: 13 commits
-
-
Committed by Goutam Tadi
-
Committed by Joao Pereira
This reverts commit 67fb52e6. CI was failing; the problem was in a new test created in that commit, which expected ORCA to do a Table Scan, but on 5X with version 2.70.2 it does a Seq Scan. This needs to be reviewed before it is committed again.
-
Committed by Shujie Zhang
-
Committed by Bhuvnesh Chaudhary
There are plan changes after commit 9d9b89bc, so update the output files with the valid plans. This was missed in the earlier commit.
-
Committed by Dhanashree Kashid
-
Committed by Bhuvnesh Chaudhary
Fix a whitespace difference that made gpdiff.pl fail incorrectly, though there was no actual diff in the output. Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Chris Hajas
gpdbrestore was not restoring any ALTER statements related to sequences during filtered restores. Co-authored-by: Kevin Yeap <kyeap@pivotal.io> Co-authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Dhanashree Kashid
Previously, while optimizing nestloop joins, ORCA always generated a blocking materialize node (cdb_strict=true). While this conservative approach ensured that the join node produced by ORCA was always deadlock safe, it sometimes produced slow-running plans. ORCA can now produce a blocking materialize only when needed, by detecting a motion hazard in the nestloop join; a streaming material is generated when there is no motion hazard. This commit adds a GUC to control this behavior: when set to off, we fall back to the old behavior of always producing a blocking materialize. Also bump statement_mem for a test in segspace: after this change, for the test query, we produce a streaming spool, which changes the number of operator groups in the memory quota calculation, and the query fails with `ERROR: insufficient memory reserved for statement`. Bump statement_mem by 1MB to test the fault injection. Also bump the ORCA version to 2.72.0. Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io> (cherry picked from commit 635c2e0f)
-
Committed by Goutam Tadi
Co-authored-by: Goutam Tadi <gtadi@pivotal.io> Co-authored-by: Fei Yang <fyang@pivotal.io>
-
Committed by Goutam Tadi
Co-authored-by: Goutam Tadi <gtadi@pivotal.io> Co-authored-by: Jemish Patel <jpatel@pivotal.io>
-
Committed by Goutam Tadi
[#159742200] Co-authored-by: Xin Zhang <xzhang@pivotal.io> Co-authored-by: Goutam Tadi <gtadi@pivotal.io> Co-authored-by: Fei Yang <fyang@pivotal.io>
-
Committed by Xin Zhang
Add behave test for FQDN_HBA flag support. Co-authored-by: Xin Zhang <xzhang@pivotal.io> Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
Committed by Goutam Tadi
- Behave tests for gpinitsystem with fqdn. Co-authored-by: Goutam Tadi <gtadi@pivotal.io> Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 10 Sep, 2018: 1 commit
-
-
Committed by Pengzhou Tang
In commit fb9081fc, we introduced a fault injector to drop a stop ack. It released a pthread mutex lock by accident, leaving the interconnect structures in a race condition. As a result, a FATAL error was reported: "FATAL: freelist NULL: count 2 max 1 buf (nil) (ic_udpifc.c:3501)".
-
- 08 Sep, 2018: 2 commits
-
-
Committed by Mel Kiyama
* docs - ANALYZE - HLL statistics, incremental analyze, 5X_STABLE backport -- backport of updates in doc PR https://github.com/greenplum-db/gpdb/pull/5696 -- changes to about_statistics.xml due to catalog differences. * docs - ANALYZE - HLL statistics, incremental analyze - fix x-refs.
-
Committed by Jacob Champion
This reverts commit 6f30f4f8. The tests that should have been running as part of that commit weren't actually added to the isolation2 schedule, so it's not clear that the commit works as expected in either master or 5X, and it is not ready for public release yet.
-
- 07 Sep, 2018: 19 commits
-
-
Committed by Jacob Champion
GPDB5 changed relation indexes on disk, so they are all invalidated when upgrading from 4. Rather than expecting the user to know what to do after that mass invalidation, write a script that performs a REINDEX DATABASE for every db_name we have, and point the user to it. Co-authored-by: Asim Praveen <apraveen@pivotal.io> (cherry picked from commit 0f2f52e1)
-
Committed by Jacob Champion
8.2->8.3 upgrade of NUMERIC types was implemented for row-oriented AO tables, but not column-oriented. Correct that here. Store upgraded Datum data in a per-DatumStream buffer, to avoid "upgrading" the same data multiple times (multiple tuples may be pointing at the same data buffer, for example with RLE compression). Cache the column's base type in the DatumStreamRead struct. Co-authored-by: Taylor Vesely <tvesely@pivotal.io> (cherry picked from commit 54895f54)
-
Committed by Jacob Champion
The values array was being double-freed, causing a crash. (cherry picked from commit 0fd360b3)
-
Committed by Jacob Champion
When upgrading to 8.4, upstream's pg_upgrade refuses to copy old sequence data due to a change in columns. However, that check also prevented copying of sequence data during an 8.2->8.3 upgrade -- Postgres doesn't care about this case, but GPDB does. Don't omit the copy if we're upgrading to 8.3, and enable the 8.2 heap upgrade for sequence tables so that the page makes sense in the new database. (cherry picked from commit 907803c0)
-
Committed by Jacob Champion
Follow-up to 71916050. Checksums were backported to Postgres 8.3, so we ignore them for <= 8.2. While we're at it, switch the data_checksum_version from a false assignment to a zero assignment, to match upstream. The actual implementation is an unsigned int; Postgres master incorrectly uses a bool type in the declaration. (cherry picked from commit c687f53f)
-
Committed by Jacob Champion
Commit 71916050 adds support for adding and removing checksums during pg_upgrade. While we don't want to support this (yet) in 5.x, we do want the bugfixes made to the checksum version checks. Try to match master's output style in the error messages.
-
Committed by Jacob Champion
Follow-up to e8de956e. Correct the GPDB 5 check to use Postgres 8.3's version number, and correctly return the cached value from the GPDB 4 check. (cherry picked from commit 86614433)
-
Committed by Heikki Linnakangas
* Don't query for external partitions in non-partitioned tables. The query for possible external partitions is fairly expensive, and it's pointless if the table is not partitioned at all. * Cache the results of server version checks. With these improvements, dumping the regression database takes about 25 seconds on my laptop, vs. 70 seconds before. (cherry picked from commit e8de956e)
-
Committed by Jacob Champion
By making sure options is always part of the query, we no longer need version-specific variable assignment at all, and we can get rid of quite a bit of duplication. (cherry picked from commit 14444cf06bcd72adc75d8a48e2f6467c83620ce2)
-
Committed by Jacob Champion
Whitespace diff only; no other changes. (cherry picked from commit b5b8480923b84c899576a16d3b751dbc575b95c7)
-
Committed by Jacob Champion
We want identical external tables to dump identically, whether they are in GPDB 4 or 5, so that we minimize false negatives during pg_upgrade tests: - Only dump OPTIONS if the options exist, regardless of what version we're dumping from. - Always dump a correct ON clause; assume that older versions of location-based web tables are effectively running ON ALL. As part of this, improve the EXTERNAL TABLE dump logic and try to minimize differences between query results for different GPDB versions. Eventually, the query should mask all of those differences by itself. (cherry picked from commit 50160f8b47690f614304825cafab8b42c8518f8c)
-
Committed by Jacob Champion
Follow-up to 4f4e5a5c. In GPDB 4, the ON clause information is stored in pg_exttable.location, not .command. (cherry picked from commit d514ddf6)
-
Committed by Paul Guo
For pg_upgrade, GPDB runs in GP_ROLE_UTILITY mode with IsBinaryUpgrade set to true. In this patch we allow some partition-related code to run if IsBinaryUpgrade is true, so that the partition-related SQL clauses generated by pg_dump can recover the previous partition schemas. Co-authored-by: Max Yang <myang@pivotal.io> (cherry picked from commit ab129d0a)
-
Committed by Jacob Champion
Partial backport of commit 18885365, which fixes the partitioning query for GPDB4.
-
Committed by Daniel Gustafsson
There are numerous cornercases with restoring indexes on partition hierarchies, a set of which include: * Indexes left on partition members from DROP INDEX commands on the partition parent * Subpartition indexes created with non-standard names * Indexes on partitions which stem from partition exchange Rather than adding code to cover all potential pitfalls, we simply won't allow indexes on partition hierarchies during upgrades, as they can be recreated after the upgrade without data loss. Also add code to the test_gpdb_pre.sql script to drop all such indexes before attempting an upgrade. (cherry picked from commit 404b1993)
-
Committed by Daniel Gustafsson
There is no reason to support checking for hash partitions in the new cluster, so make OLD_CLUSTER be the only option to simplify the code a bit. (cherry picked from commit 25c1c959)
-
Committed by Heikki Linnakangas
I was about to add this as part of the PostgreSQL 8.4 merge, as a check when upgrading from 8.3 to 8.4, because the hash algorithm was changed in 8.4. However, it turns out that pg_dump doesn't support hash partitioned tables at all, so pg_upgrade won't work on a database that contains any hash partitioned tables, even on a same-version upgrade. Hence, let's add this check unconditionally on all server versions. There are comments talking about the hash function change, because of that development history. I think that's useful documentation, just in case we ever start to support hash partitions in pg_dump, so I left it there. (cherry picked from commit 22072ec5)
-
Committed by Daniel Gustafsson
Partitioning hierarchies with SERIAL columns were not restored properly, since the definition was split into the three operations below: CREATE TABLE foo .. ; CREATE SEQUENCE fooseq .. ; ALTER TABLE ONLY foo ALTER COLUMN .. SET DEFAULT nextval('fooseq'); The ALTER TABLE ONLY would fail with an ERROR stating that the attrdef must be applied to the partition members as well. Fix by identifying partitioning parents during table info gathering in pg_dump and omitting ONLY in the case of parent tables. (cherry picked from commit 68877c31)
-
Committed by Jimmy Yih
A `CREATE TABLE AS` without a `DISTRIBUTED BY` clause will create a randomly distributed table when optimized by ORCA. The plan for the CTAS will have a redistribute motion (random) between the scan and the insert. Depending on your data, this style of plan could be more even, equally even, or less even than a hash-distributed table (the kind of distribution usually assumed by the planner). This commit changes the test to explicitly distribute by the same column that the planner would guess. Co-authored-by: Jesse Zhang <sbjesse@gmail.com> (cherry picked from commit 7943d890)
-
- 06 Sep, 2018: 4 commits
-
-
Committed by Omer Arap
This commit adds more log messages and updates existing log messages to increase logging verbosity. Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by Taylor Vesely
Adds a facility to dump XLOG_HINT records with xlogdump. We have backported XLOG_HINT xlog records from upstream, and this record type did not exist at the time xlogdump was originally created.
-
Committed by Asim R P
Previously, xlogdump used the segment ID obtained by parsing the xlog filename to compute the current record's logid and offset. We came across at least one case where this logic fails, producing current xlog locations completely unrelated to the previous xlog location. We found that this can happen during xlog recycling, when a segment reuses previously allocated xlog segment files. This patch fixes the logic to use the page start address recorded in the header of each xlog page. Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by David Kimura
The issue is that filerep assumed the LSN of a page is always greater than the LSN of the change tracking log for that page. This assumption was broken (in the specific case of hint bits) in 5.X by the heap checksums backport from upstream. The breach in the assumption led filerep resync to incorrectly skip syncing certain blocks from primary to mirror. As part of the heap-checksum backport, we introduced XLOG_HINT records to capture full page images in XLOG. A transaction that sets hint bits on a page emits an XLOG_HINT record containing a full image of that page, to avoid false alarms during checksum validation in the event of a torn page write. After emitting the XLOG_HINT record, the code (MarkBufferDirtyHint()) sets the page's LSN only if the page is not already marked dirty. The implication of this logic is that a page's LSN may remain lower than the LSN of the most recent XLOG record emitted for that page (which is also the LSN recorded in the filerep change tracking log). Co-authored-by: David Kimura <dkimura@pivotal.io>
-