1. 12 Sep 2018 (1 commit)
  2. 11 Sep 2018 (13 commits)
  3. 10 Sep 2018 (1 commit)
  4. 08 Sep 2018 (2 commits)
  5. 07 Sep 2018 (19 commits)
    • pg_upgrade: dump REINDEX instructions after upgrade from GPDB4 · 4bf31b15
      Jacob Champion committed
      GPDB5 changed relation indexes on disk, and so they are all invalidated
      when upgrading from 4. Rather than expecting the user to know what to do
      after that mass-invalidation, write a script to perform a REINDEX
      DATABASE for every db_name we have, and point the user to it.
      Co-authored-by: Asim Praveen <apraveen@pivotal.io>
      
      (cherry picked from commit 0f2f52e1)
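      A sketch of what the generated script might contain (the database names here are hypothetical stand-ins for those found in the old cluster):

      ```sql
      -- Hypothetical contents of the generated REINDEX script; db1/db2 stand in
      -- for the actual database names. REINDEX DATABASE must be run while
      -- connected to the database it names, hence the \connect before each one.
      \connect db1
      REINDEX DATABASE db1;
      \connect db2
      REINDEX DATABASE db2;
      ```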
    • Implement NUMERIC upgrade for AOCS versions < 8.3 · 6f30f4f8
      Jacob Champion committed
      8.2->8.3 upgrade of NUMERIC types was implemented for row-oriented AO
      tables, but not column-oriented. Correct that here.
      
      Store upgraded Datum data in a per-DatumStream buffer, to avoid
      "upgrading" the same data multiple times (multiple tuples may be
      pointing at the same data buffer, for example with RLE compression).
      Cache the column's base type in the DatumStreamRead struct.
      Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
      
      (cherry picked from commit 54895f54)
    • upgrade_tuple: fix isnull array deallocation · faf24c4b
      Jacob Champion committed
      The values array was being double-freed, causing a crash.
      
      (cherry picked from commit 0fd360b3)
    • get_rel_infos: handle sequences during 8.2->8.3 upgrade · 913234f2
      Jacob Champion committed
      When upgrading to 8.4, upstream's pg_upgrade refuses to copy old
      sequence data due to a change in columns. However, that check also
      prevented copying of sequence data during an 8.2->8.3 upgrade --
      Postgres doesn't care about this case, but GPDB does.
      
      Don't omit the copy if we're upgrading to 8.3, and enable the 8.2 heap
      upgrade for sequence tables so that the page makes sense in the new
      database.
      
      (cherry picked from commit 907803c0)
    • get_control_data: fix checksum version check · 8cdeff7e
      Jacob Champion committed
      Follow-up to 71916050. Checksums were backported to Postgres 8.3, so we
      ignore them for <= 8.2.
      
      While we're at it, switch the data_checksum_version from a false
      assignment to a zero assignment, to match upstream. The actual
      implementation is an unsigned int; Postgres master incorrectly uses a
      bool type in the declaration.
      
      (cherry picked from commit c687f53f)
    • pg_upgrade: partially backport checksum checks from master · 0dde62ec
      Jacob Champion committed
      Commit 71916050 adds support for adding and removing checksums during
      pg_upgrade. While we don't want to support this (yet) in 5.x, we do want
      the bugfixes made to the checksum version checks. Try to match master's
      output style in the error messages.
    • pg_dump: correct version checks · 11c558d6
      Jacob Champion committed
      Follow-up to e8de956e. Correct the GPDB 5 check to use Postgres 8.3's
      version number, and correctly return the cached value from the GPDB 4
      check.
      
      (cherry picked from commit 86614433)
    • Speed improvements to pg_dump. · 3c2ee897
      Heikki Linnakangas committed
      * Don't query for external partitions in non-partitioned tables. The
        query for possible external partitions is fairly expensive, and it's
        pointless if the table is not partitioned at all.
      
      * Cache the results of server version checks.
      
      With these improvements, dumping the regression database takes about 25
      seconds on my laptop, vs. 70 seconds before.
      
      (cherry picked from commit e8de956e)
    • pg_dump: continue minimizing EXTERNAL version-specific logic · ccf2c575
      Jacob Champion committed
      By making sure options is always part of the query, we no longer need
      version-specific variable assignment at all, and we can get rid of quite
      a bit of duplication.
      
      (cherry picked from commit 14444cf06bcd72adc75d8a48e2f6467c83620ce2)
    • pg_dump: improve EXTERNAL TABLE query whitespace · f50fd241
      Jacob Champion committed
      Whitespace diff only; no other changes.
      
      (cherry picked from commit b5b8480923b84c899576a16d3b751dbc575b95c7)
    • pg_dump: remove trivial EXTERNAL TABLE diffs in dump · c05d4b29
      Jacob Champion committed
      We want identical external tables to dump identically, whether they are
      in GPDB 4 or 5, so that we minimize false negatives during pg_upgrade
      tests:
      
      - Only dump OPTIONS if the options exist, regardless of what version
        we're dumping from.
      - Always dump a correct ON clause; assume that older versions of
        location-based web tables are effectively running ON ALL.
      
      As part of this, improve the EXTERNAL TABLE dump logic and try to
      minimize differences between query results for different GPDB versions.
      Eventually, the query should mask all of those differences by itself.
      
      (cherry picked from commit 50160f8b47690f614304825cafab8b42c8518f8c)
    • pg_dump: fix dump of 4.x EXTERNAL tables with ON clauses · 5b1b54d7
      Jacob Champion committed
      Follow-up to 4f4e5a5c. In GPDB 4, the ON clause information is stored in
      pg_exttable.location, not .command.
      
      (cherry picked from commit d514ddf6)
    • Allow some partition related code run for pg_upgrade. (#4774) · 9cf41b26
      Paul Guo committed
      For pg_upgrade, GPDB runs in GP_ROLE_UTILITY mode with IsBinaryUpgrade set to true.
      This patch allows some partition-related code to run when IsBinaryUpgrade is true,
      so that the partition-related SQL clauses generated by pg_dump can recreate the
      previous partition schemas.
      Co-authored-by: Max Yang <myang@pivotal.io>
      (cherry picked from commit ab129d0a)
    • Backport fix for mistake in pg_dump partitioning query · f17e57d1
      Jacob Champion committed
      Partial backport of commit 18885365, which fixes the partitioning query
      for GPDB4.
    • Disallow indexes on partitioned tables during upgrade · 4b3be03b
      Daniel Gustafsson committed
      There are numerous cornercases with restoring indexes on partition
      hierarchies, a set of which include:
      
      	* Indexes left on partition members from DROP INDEX commands
      	  on the partition parent
      	* Subpartition indexes created with non-standard names
      	* Indexes on partitions which stem from partition exchange
      
      Rather than adding code to cover all potential pitfalls, we simply
      won't allow indexes on partition hierarchies during upgrades, as
      they can be recreated after the upgrade without data loss.
      
      Also add code to the test_gpdb_pre.sql script to drop all such
      indexes before attempting an upgrade.
      
      (cherry picked from commit 404b1993)
    • Make OLD_CLUSTER explicit for hash partition checks · be6ddf2b
      Daniel Gustafsson committed
      There is no reason to support checking for hash partitions in the
      new cluster, so make OLD_CLUSTER be the only option to simplify
      the code a bit.
      
      (cherry picked from commit 25c1c959)
    • Add check for hash partitioned tables in pg_upgrade. · f394ab59
      Heikki Linnakangas committed
      I was about to add this as part of the PostgreSQL 8.4 merge, as a check
      when upgrading from 8.3 to 8.4, because the hash algorithm was changed
      in 8.4. However, turns out that pg_dump doesn't support hash partitioned
      tables at all, so pg_upgrade won't work on a database that contains any
      hash partitioned tables, even on a same-version upgrade. Hence, let's
      add this check unconditionally on all server versions.
      
      There are comments talking about the hash function change, because of that
      development history. I think that's useful documentation, just in case
      we ever start to support hash partitions in pg_dump, so I left it there.
      
      (cherry picked from commit 22072ec5)
    • Fix restore of SERIAL columns in partitioning · 615ac0e7
      Daniel Gustafsson committed
      Partitioning hierarchies with SERIAL columns were not restored
      properly since the definition was split into the below three
      operations:
      
      CREATE TABLE foo .. ;
      CREATE SEQUENCE fooseq.. ;
      ALTER TABLE ONLY foo ALTER COLUMN .. SET DEFAULT nextval('fooseq');
      
      The ALTER TABLE ONLY would fail with an ERROR stating that the
      attrdef must be applied to the partition members as well. Fix
      by identifying partitioning parents during table info gathering
      in pg_dump and omit ONLY in case of parent tables.
      
      (cherry picked from commit 68877c31)
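      For illustration, the fix changes the emitted default-setting statement for partitioning parents roughly as follows (the column name id, like foo and fooseq above, is a placeholder):

      ```sql
      -- Before: ONLY causes an ERROR on a partitioning parent, because the
      -- attrdef must also be applied to the partition members
      ALTER TABLE ONLY foo ALTER COLUMN id SET DEFAULT nextval('fooseq');
      -- After: without ONLY, the default is applied to partition members too
      ALTER TABLE foo ALTER COLUMN id SET DEFAULT nextval('fooseq');
      ```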
    • Fix intermittent issue with appendonly regression test · 9882f9c2
      Jimmy Yih committed
      A `CREATE TABLE AS` without a `DISTRIBUTED BY` clause will create a
      randomly distributed table, when optimized by ORCA. The plan for the
      CTAS will have a redistribute motion (random) between the scan and the
      insert. Depending on your data, this style of plan could be more even,
      equally even, or less even than a hash distributed table (the kind of
      distribution usually assumed by planner).
      
      This commit changes the test to explicitly distribute by the same column
      that planner would guess.
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      (cherry picked from commit 7943d890)
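      A sketch of the test change (table and column names are hypothetical):

      ```sql
      -- Before: under ORCA, a CTAS without DISTRIBUTED BY creates a randomly
      -- distributed table, so row placement varies between runs
      CREATE TABLE ao_result AS SELECT * FROM source_table;
      -- After: pin the distribution to the column the planner would have chosen
      CREATE TABLE ao_result AS SELECT * FROM source_table DISTRIBUTED BY (a);
      ```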
  6. 06 Sep 2018 (4 commits)
    • Making incremental analyze more verbose · 58682ae1
      Omer Arap committed
      This commit adds more log messages and updates existing log messages to
      increase logging verbosity.
      Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
    • Dump XLOG_HINT records with xlogdump · d501d536
      Taylor Vesely committed
      Adds a facility to dump XLOG_HINT records with xlogdump. XLOG_HINT xlog records
      were backported from upstream, and this record type did not exist when xlogdump
      was originally written.
    • xlogdump should use page header to compute current record location · 84408606
      Asim R P committed
      Previously, xlogdump would use segment ID obtained by parsing the xlog filename
      to compute current record's logid and offset.  We came across at least one case
      where this logic fails, leading to current xlog locations completely unrelated
      to previous xlog location.  We found that this can happen during xlog recycle
      when the segment will reuse previously allocated xlog segment files.  This
      patch fixes the logic to use page start address recorded in header of each xlog
      page.
      Co-authored-by: David Kimura <dkimura@pivotal.io>
    • Relax filerep resync logic to send change tracking entries (#5651) · 492803b6
      David Kimura committed
      The issue is that filerep assumed the LSN of a page is always greater than the LSN
      of the change tracking log entry for that page. This assumption was broken (in the
      specific case of hint bits) in 5.X by the heap checksums back port from upstream.
      The breach of the assumption led filerep resync to incorrectly skip syncing
      certain blocks from primary to mirror.
      
      As part of the heap-checksum back port, we introduced XLOG_HINT records to
      capture full page images in XLOG. A transaction that sets hints bits on a page
      emits XLOG_HINT record containing full image of that page. This is to avoid
      false alarms during checksum validation, in the event of a torn page write.
      After emitting the XLOG_HINT record, the code (MarkBufferDirtyHint()) sets the
      page's LSN only if the page is not already marked dirty. The implication of
      this logic is that a page's LSN may remain lower than the LSN of the most
      recent XLOG record emitted for that page (which is also the LSN recorded in
      filerep change tracking log).
      Co-authored-by: David Kimura <dkimura@pivotal.io>