1. 03 Aug 2020, 1 commit
  2. 01 Aug 2020, 1 commit
    • gpinitsystem: use new 6-field ARRAY format internally for QD and QEs · 27038bd4
      Committed by bhuvnesh chaudhary
      The initialization file (passed as gpinitsystem -I <file>) can have two
      formats: legacy (5-field) and new (6-field, which adds the HOST_ADDRESS).
      
      This commit fixes a bug in which an internal sorting routine that matched
      a primary with its corresponding mirror assumed that <file> was always
      in the new format.  The fix is to convert any input <file> to the new
      format by rewriting the QD_ARRAY, PRIMARY_ARRAY and MIRROR_ARRAY to
      have 6 fields.  We also always use '~' as the separator instead of ':'
      for consistency.
      
      The bug fixed here is that a 5-field <file> was being sorted numerically,
      causing either the hostname (on a multi-host cluster) or the port (on a
      single-host cluster) to be used as the sort key instead of the content.
      This could result in a primary and its corresponding mirror being created
      on different contents, which fortunately hit an internal error check.
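      
      A rough sketch of the convert-then-sort idea, written in Python rather
      than the actual gpinitsystem bash code; the exact field order and helper
      names below are assumptions made only for illustration:
      
      ```
      # Hypothetical field layouts for illustration; the real gpinitsystem
      # layout may differ.
      #   Legacy (5 fields): host~port~datadir~dbid~content
      #   New    (6 fields): host~address~port~datadir~dbid~content
      def to_new_format(line):
          fields = line.strip().replace(":", "~").split("~")
          if len(fields) == 5:          # legacy line: reuse the host as the address
              fields.insert(1, fields[0])
          return "~".join(fields)
      
      def sort_by_content(lines):
          # Sort on the content (an integer field), not on the hostname or port,
          # so each primary lines up with its corresponding mirror.
          return sorted((to_new_format(l) for l in lines),
                        key=lambda l: int(l.split("~")[-1]))
      ```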
      
      Unit tests and a behave test have been added as well.  The behave test
      uses a demo cluster to validate that a legacy gpinitsystem initialization
      file (i.e. one that has 5 fields) successfully creates a Greenplum
      database.
      Co-authored-by: David Krieger <dkrieger@vmware.com>
      27038bd4
  3. 31 Jul 2020, 4 commits
    • Correct and stabilize some replication tests · 15dd8027
      Committed by Ashwin Agrawal
      Add pg_stat_clear_snapshot() in functions that loop over
      gp_stat_replication / pg_stat_replication so the result is refreshed
      every time the query is run as part of the same transaction. Without
      pg_stat_clear_snapshot(), the query result is not refreshed for
      pg_stat_activity nor for the xx_stat_replication functions across
      multiple invocations inside a transaction, so in its absence the tests
      become flaky.
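      
      A minimal sketch of the polling pattern, assuming a psycopg2 connection;
      the view and function names come from the commit message, while the DSN
      and the polling condition are illustrative:
      
      ```
      import time
      import psycopg2
      
      conn = psycopg2.connect("dbname=postgres")   # hypothetical DSN
      cur = conn.cursor()
      # psycopg2 opens a transaction on the first statement, so every iteration
      # of this loop runs inside the same transaction, like the test functions.
      for _ in range(60):
          cur.execute("SELECT count(*) FROM gp_stat_replication "
                      "WHERE state = 'streaming'")
          if cur.fetchone()[0] > 0:
              break
          # Without this, the stats snapshot taken by the first query would be
          # reused by every later query in the transaction, so the loop could
          # never observe a state change.
          cur.execute("SELECT pg_stat_clear_snapshot()")
          time.sleep(1)
      conn.rollback()
      conn.close()
      ```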
      
      Also, the tests commit_blocking_on_standby and dtx_recovery_wait_lsn were
      initially committed with wrong expectations and hence failed to test the
      intended behavior. They now reflect the correct expectations.
      
      (cherry picked from commit c565e988)
      15dd8027
    • Add mirror_replay test to greenplum_schedule · 29ca99ee
      Committed by Ashwin Agrawal
      This was missed in commit 96b332c0.
      
      (cherry picked from commit 8ef5d722)
      29ca99ee
    • Add knowledge of partition selectors to Orca's DPv2 algorithm (#10263) (#10558) · d3886cf2
      Committed by Chris Hajas
      Orca's DP algorithms currently generate logical alternatives based only on cardinality; they do not take into account motions/partition selectors as these are physical properties handled later in the optimization process. Since DPv2 doesn't generate all possible alternatives for the optimization stage, we end up generating alternatives that do not support partition selection or can only place poor partition selectors.
      
      This PR introduces partition knowledge into the DPv2 algorithm. If there is a possible partition selector, it will generate an alternative that considers it, in addition to the previous alternatives.
      
      We introduce new properties, including m_contain_PS, which indicates whether an SExpressionInfo contains a PS for a particular expression. We consider an expression to have a possible partition selector if the join expression columns overlap with the partitioned table's partition key. If they do, we mark this expression as containing a PS for that particular PT.
      
      We consider a good PS to be one that is selective, e.g.:
      ```
      - DTS
      - PS
          - TS
           - Pred
      ```
      
      would be selective. However, if there is no selective predicate, we do not consider this as a promising PS.
      
      For now, we add just a single alternative that satisfies this property and only consider linear trees.
      
      This is a backport of 9c445321
      d3886cf2
    • Improve cardinality for joins using distribution columns in ORCA · 4b473948
      Committed by Ashuka Xue
      This commit only affects cardinality estimation in ORCA when the user
      sets `optimizer_damping_factor_join = 0`. It improves the square root
      algorithm first introduced by commit ce453cf2.
      
      In the original square root algorithm, we assumed that distribution
      column predicates would have some correlation with other predicates in
      the join and would therefore be damped accordingly when calculating join
      cardinality.
      
      However, distribution columns are ideally unique in order to get the
      best performance from GPDB. Under this assumption, distribution columns
      should not be correlated and thus need to be treated as independent
      when calculating join cardinality. This is a best guess since we do not
      have a way to support correlated columns at this time.
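      
      A toy numeric sketch of the difference; the exact damping formula lives
      in ORCA (commit ce453cf2), so the square-root combination below is only
      an assumed stand-in for illustration:
      
      ```
      import math
      
      def damped(sels):
          # Assumed form of square-root damping: sort ascending and halve the
          # exponent for each additional predicate (correlated-columns view).
          result, exponent = 1.0, 1.0
          for s in sorted(sels):
              result *= s ** exponent
              exponent /= 2.0
          return result
      
      def independent(sels):
          # Independence assumption now used for distribution-column predicates.
          return math.prod(sels)
      
      preds = [0.001, 0.01]       # made-up selectivities of two join predicates
      print(damped(preds))        # roughly 1e-4: damping keeps the estimate larger
      print(independent(preds))   # roughly 1e-5: independence gives a smaller estimate
      ```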
      Co-authored-by: Ashuka Xue <axue@vmware.com>
      Co-authored-by: Chris Hajas <chajas@vmware.com>
      4b473948
  4. 30 Jul 2020, 1 commit
    • Add Orca support for index only scan · 93c9829a
      Committed by David Kimura
      This commit allows Orca to select plans that leverage the IndexOnlyScan
      node. A new GUC, 'optimizer_enable_indexonlyscan', is used to enable or
      disable this feature. Index only scan is disabled by default until the
      following issues are addressed:
      
        1) Implement a cost comparison model for index only scans. Currently,
           the cost is hard coded for testing purposes.
        2) Support index only scan using GiST and SP-GiST where allowed.
           Currently, the code only supports index only scans on B-tree indexes.
      Co-authored-by: Chris Hajas <chajas@vmware.com>
      (cherry picked from commit 3b72df18)
      93c9829a
  5. 29 Jul 2020, 20 commits
  6. 28 Jul 2020, 2 commits
    • Fix flaky test isolation2:pg_basebackup_with_tablespaces (#10509) · 5783fa3a
      Committed by Paul Guo
      Here is the diff output of the test result.
      
       drop database some_database_without_tablespace;
       -DROP
       +ERROR:  database "some_database_without_tablespace" is being accessed by other users
       +DETAIL:  There is 1 other session using the database.
       drop tablespace some_basebackup_tablespace;
       -DROP
       +ERROR:  tablespace "some_basebackup_tablespace" is not empty
      
      The reason is that after the client connection to the database exits, the
      server needs some time (the process might be scheduled out soon, and the
      operation needs to contend for the ProcArrayLock lock) to release the
      PGPROC in proc_exit()->ProcArrayRemove(). During dropdb() (for the
      database drop), postgres calls CountOtherDBBackends() to see if there are
      still sessions using the database by checking proc->databaseId, and it
      retries for at most 5 seconds. This test quits the db connection to
      some_database_without_tablespace and then drops the database immediately.
      This should mostly be fine, but if the system is slow or under heavy
      load, it could still lead to test flakiness.
      
      This issue can be simulated using gdb. Let's poll until the drop database
      command succeeds for the affected database.  It seems that the DROP
      DATABASE SQL command cannot be run inside a transaction block, so I could
      not implement this with plpgsql; instead I use the dropdb utility and a
      bash command to implement it.
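      
      A minimal sketch of the polling idea, in Python instead of the bash the
      test actually uses; the dropdb utility is from the commit message, while
      the retry count, delay, and helper name are illustrative:
      
      ```
      import subprocess
      import time
      
      def drop_database_with_retry(dbname, attempts=30, delay=1.0):
          for _ in range(attempts):
              # dropdb exits non-zero while another backend still uses the db.
              result = subprocess.run(["dropdb", dbname],
                                      capture_output=True, text=True)
              if result.returncode == 0:
                  return
              time.sleep(delay)
          raise RuntimeError(f"could not drop {dbname}: {result.stderr.strip()}")
      
      drop_database_with_retry("some_database_without_tablespace")
      ```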
      Reviewed-by: Asim R P <pasim@vmware.com>
      (cherry picked from commit c8b00ac7)
      5783fa3a
    • docs - PL/Container 3 supports the DO command - 6.x · 66242858
      Committed by mkiyama
      Also, fix a bad cross-reference.
      66242858
  7. 23 Jul 2020, 4 commits
    • Change log level in ExecChooseHashTableSize · 60d50cd6
      Committed by Hubert Zhang
      ExecChooseHashTableSize() is a hot function that is called not only by
      the executor but also by the planner. The planner calls this function
      when calculating the cost of each join path, and the number of join
      paths grows exponentially with the number of tables. As a result, do not
      use elog(LOG) here, to avoid generating too many log lines.
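      
      A back-of-the-envelope sketch of why one log line per call adds up; the
      factorial counts below are only a rough proxy for the size of the
      planner's join search space:
      
      ```
      import math
      
      # One elog(LOG) per ExecChooseHashTableSize() call means roughly one log
      # line per costed join path, and the number of orderings explodes fast.
      for n_tables in (4, 8, 12):
          print(n_tables, math.factorial(n_tables))
      # 4 -> 24, 8 -> 40320, 12 -> 479001600
      ```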
      
      (cherry picked from commit 6b4d93c5)
      60d50cd6
    • Update pre-allocated shared snapshot slot number. · 1b0195c9
      Committed by Paul Guo
      Previously it used max_prepared_xacts for the shared snapshot slot
      number. The reason it does not use MaxBackends, per the comment, is that
      ideally on a QE we want to use the QD's MaxBackends for the slot number,
      and note that the QE's MaxBackends is usually greater than the QD's
      MaxBackends due to potentially multiple gangs per query. The code
      nevertheless ended up using max_prepared_xacts for the shared snapshot
      slot number calculation. That is not correct given that we have read-only
      queries and now have one-phase commit.  Let's use MaxBackends for the
      shared snapshot slot number calculation for safety, though this might
      waste some memory.
      Reviewed-by: xiong-gang <gxiong@pivotal.io>
      (cherry picked from commit f6c59503)
      1b0195c9
    • Limit gxact number on master with MaxBackends. · 0c57d9fc
      Committed by Paul Guo
      Previously we assigned it as max_prepared_xacts. It is used to initialize
      some 2PC-related shared memory. For example, the array
      shmCommittedGxactArray is created with this length, and that array is
      used to collect not-yet "forgotten" distributed transactions during
      master/standby recovery, but the array length might be problematic since:
      
      1. If the master max_prepared_xacts is equal to the segment
      max_prepared_xacts, as is usual, it is possible that some distributed
      transactions use only a partial gang, so the total number of distributed
      transactions might be larger (even much larger) than max_prepared_xacts
      (see the toy sketch after this list). The documentation says
      max_prepared_xacts should be greater than max_connections, but there is
      no code to enforce that.
      
      2. It is also possible that the master max_prepared_xacts differs from
      the segment max_prepared_xacts (although the documentation does not
      suggest this, there is no code to enforce it).
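      
      A toy sketch of the partial-gang effect from point 1; every number and
      name below is made up purely for illustration:
      
      ```
      max_prepared_xacts = 3            # hypothetical per-instance setting
      segments = ["seg0", "seg1", "seg2"]
      
      # Each in-progress distributed transaction lists the segments it touches.
      distributed_txns = {
          "tx1": ["seg0"], "tx2": ["seg1"], "tx3": ["seg2"],
          "tx4": ["seg0"], "tx5": ["seg1"], "tx6": ["seg2"],
      }
      
      # No single segment exceeds max_prepared_xacts ...
      per_segment = {s: sum(s in gang for gang in distributed_txns.values())
                     for s in segments}
      print(per_segment)                # {'seg0': 2, 'seg1': 2, 'seg2': 2}
      
      # ... yet the master must track more gxacts than max_prepared_xacts.
      print(len(distributed_txns))      # 6 > 3
      ```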
      
      To fix this we use MaxBackends for the gxact number on master. We could
      just use the GUC max_connections (MaxBackends additionally includes the
      number of autovacuum workers and background workers beyond
      max_connections), but I'm conservatively using MaxBackends, since this
      issue is annoying: the standby cannot recover, due to the FATAL message
      below, even after a postgres reboot, unless we temporarily increase the
      max_prepared_transactions GUC value.
      
      2020-07-17 16:48:19.178667
      CST,,,p33652,th1972721600,,,,0,,,seg-1,,,,,"FATAL","XX000","the limit of 3
      distributed transactions has been reached","It should not happen. Temporarily
      increase max_connections (need postmaster reboot) on the postgres (master or
      standby) to work around this issue and then report a bug",,,,"xlog redo at
      0/C339BA0 for Transaction/DISTRIBUTED_COMMIT: distributed commit 2020-07-17
      16:48:19.101832+08 gid = 1594975696-0000000009, gxid =
      9",,0,,"cdbdtxrecovery.c",571,"Stack trace:
      
      1    0xb3a30f postgres errstart (elog.c:558)
      2    0xc3da4d postgres redoDistributedCommitRecord (cdbdtxrecovery.c:565)
      3    0x564227 postgres <symbol not found> (xact.c:6942)
      4    0x564671 postgres xact_redo (xact.c:7080)
      5    0x56fee5 postgres StartupXLOG (xlog.c:7207)
      Reviewed-by: xiong-gang <gxiong@pivotal.io>
      (cherry picked from commit 2a961e65)
      0c57d9fc
    • Make test function wait_for_replication_replay() a common UDF. · e6addf3a
      Committed by Paul Guo
      We need that in more than one test.
      Reviewed-by: xiong-gang <gxiong@pivotal.io>
      (cherry picked from commit af942980)
      e6addf3a
  8. 22 Jul 2020, 4 commits
    • Correct plan of general & segmentGeneral path with volatile functions. · 5b4c4f59
      Committed by Zhenghua Lyu
      General and segmentGeneral locus imply that if the corresponding slice
      is executed on many different segments, each execution should provide
      the same result data set. Thus, in some cases, General and segmentGeneral
      can be treated like broadcast.
      
      But what if a general or segmentGeneral locus path contains volatile
      functions? Volatile functions, by definition, do not guarantee the same
      results across invocations. So in such cases the path loses this
      property and cannot be treated as *general. Previously, the Greenplum
      planner did not handle these cases correctly. A Limit over a general or
      segmentGeneral path has the same issue.
      
      The idea of the fix is: when we find the pattern (a general or
      segmentGeneral locus path that contains volatile functions), we create a
      motion path above it to turn its locus into singleQE and then create a
      projection path (a conceptual sketch is given below). The core job then
      becomes choosing the places to check:
      
        1. For a single base rel, we only need to check its restrictions; this
           is at the bottom of the planner, in the function set_rel_pathlist.
        2. When creating a join path, if the join locus is general or
           segmentGeneral, check its joinqual to see if it contains volatile
           functions.
        3. When handling a subquery, we invoke the set_subquery_pathlist
           function; at the end of this function, check the targetlist and
           havingQual.
        4. When creating a limit path, the same check-and-change algorithm
           should be used.
        5. Correctly handle make_subplan.
      
      The ORDER BY and GROUP BY clauses are included in the targetlist and are
      handled by Step 3 above.
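      
      A conceptual sketch of the decision described above; the locus names
      follow the commit message, while the types and helper name are
      illustrative, not the planner's actual symbols:
      
      ```
      from dataclasses import dataclass
      
      @dataclass
      class Path:
          locus: str                    # "General", "SegmentGeneral", "SingleQE", ...
          has_volatile_functions: bool  # volatile function in the checked clauses
      
      def pin_to_single_qe_if_volatile(path: Path) -> Path:
          # A General/SegmentGeneral slice is assumed to produce identical rows
          # on every segment; a volatile function breaks that assumption, so the
          # path is gathered to a single QE (via a motion plus a projection path).
          if path.locus in ("General", "SegmentGeneral") and path.has_volatile_functions:
              return Path(locus="SingleQE", has_volatile_functions=True)
          return path
      ```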
      
      This commit also fixes DML on replicated tables. UPDATE and DELETE
      statements on a replicated table are special: they have to be dispatched
      to every segment for execution. So if they contain volatile functions in
      their targetList or WHERE clause, we should reject such statements:
      
        1. For the targetList, we check it in the function
           create_motion_path_for_upddel.
        2. The WHERE clause is handled in the query planner; when we find the
           pattern and want to fix it, we additionally check whether we are
           updating or deleting a replicated table, and if so we reject the
           statement.
      
      Cherry-picked from commit d1f9b96b from master to 6X.
      5b4c4f59
    • Use postgres database for pg_rewind's clean-shutdown execution to avoid potential pg_rewind hang. · 777a4cdc
      Committed by Paul Guo
      During testing, I encountered an incremental gprecoverseg hang issue.
      Incremental gprecoverseg is based on pg_rewind.  pg_rewind launches a
      single-mode postgres process and quits after crash recovery if the
      postgres instance was not cleanly shut down; this is used to ensure that
      postgres is in a consistent state before doing incremental recovery. I
      found that the single-mode postgres hangs with the stack below.
      
      #1  0x00000000008cf2d6 in PGSemaphoreLock (sema=0x7f238274a4b0, interruptOK=1 '\001') at pg_sema.c:422
      #2  0x00000000009614ed in ProcSleep (locallock=0x2c783c0, lockMethodTable=0xddb140 <default_lockmethod>) at proc.c:1347
      #3  0x000000000095a0c1 in WaitOnLock (locallock=0x2c783c0, owner=0x2cbf950) at lock.c:1853
      #4  0x0000000000958e3a in LockAcquireExtended (locktag=0x7ffde826aa60, lockmode=3, sessionLock=0 '\000', dontWait=0 '\000', reportMemoryError=1 '\001', locallockp=0x0) at lock.c:1155
      #5  0x0000000000957e64 in LockAcquire (locktag=0x7ffde826aa60, lockmode=3, sessionLock=0 '\000', dontWait=0 '\000') at lock.c:700
      #6  0x000000000095728c in LockSharedObject (classid=1262, objid=1, objsubid=0, lockmode=3) at lmgr.c:939
      #7  0x0000000000b0152b in InitPostgres (in_dbname=0x2c769f0 "template1", dboid=0, username=0x2c59340 "gpadmin", out_dbname=0x0) at postinit.c:1019
      #8  0x000000000097b970 in PostgresMain (argc=5, argv=0x2c51990, dbname=0x2c769f0 "template1", username=0x2c59340 "gpadmin") at postgres.c:4820
      #9  0x00000000007dc432 in main (argc=5, argv=0x2c51990) at main.c:241
      
      It tries to take the lock for template1 on pg_database with lockmode 3,
      but that conflicts with a lockmode 5 lock held by a recovered dtx
      transaction in startup RecoverPreparedTransactions(). Typically the dtx
      transaction comes from "create database" (by default the template
      database is template1).
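      
      A tiny reference sketch to make the conflict easier to read; the mode
      numbers and conflict sets below follow the standard PostgreSQL lock
      compatibility table:
      
      ```
      # 1=AccessShare 2=RowShare 3=RowExclusive 4=ShareUpdateExclusive
      # 5=Share 6=ShareRowExclusive 7=Exclusive 8=AccessExclusive
      conflicts = {
          3: {5, 6, 7, 8},      # RowExclusiveLock conflicts with Share and stronger
          5: {3, 4, 6, 7, 8},   # ShareLock conflicts with RowExclusive, SUE, ...
      }
      # The single-mode backend wants mode 3 on pg_database for template1, but
      # the recovered dtx transaction already holds mode 5, so it must wait.
      print(5 in conflicts[3])  # True
      ```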
      
      Fix this by using the postgres database for single-mode postgres
      execution. The postgres database is commonly used by many background
      worker backends like dtx recovery, gdd and ftsprobe. With this change,
      we do not need to worry about "create database" with template postgres,
      etc., since they won't succeed, and thus we avoid the lock conflict.
      
      We might be able to fix this in InitPostgres() by bypassing the locking
      code in single mode, but the current fix seems safer.  Note that
      InitPostgres() locks/unlocks some other catalog tables as well, but
      almost all of them use lock mode 1 (except mode 3 on
      pg_resqueuecapability, per the debugging output).  It does not seem
      usual in real scenarios to have a dtx transaction that locks a catalog
      table with mode 8, which conflicts with mode 1.  If we encounter that
      later we will need to think out a better (and possibly non-trivial)
      solution. For now let's fix the issue we encountered first.
      
      Note that the code fixes in buildMirrorSegments.py and twophase.c in this
      patch are not related to this issue. They do not seem to be strict bugs,
      but we had better fix them to avoid potential issues in the future.
      Reviewed-by: Ashwin Agrawal <aashwin@vmware.com>
      Reviewed-by: Asim R P <pasim@vmware.com>
      (cherry picked from commit 288908f3)
      777a4cdc
    • Fix "Too many distributed transactions for snapshot" (#10500) · af8932a0
      Committed by Paul Guo
      Now that a distributed transaction no longer has to use a full gang, the
      number of in-progress distributed transactions on master might exceed
      max_prepared_xacts if max_prepared_xacts is configured with a small
      value. max_prepared_xacts is used as the inProgressXidArray length for
      the distributed snapshot, so this can lead to distributed snapshot
      creation failing with "Too many distributed transactions for snapshot"
      if the system is under heavy 2PC load. Fix this by using
      GetMaxSnapshotXidCount() for the length of the array inProgressXidArray,
      following the setting on master.
      
      This fixes github issue https://github.com/greenplum-db/gpdb/issues/10057
      
      No test is added for this since isolation2:prepare_limit already covers
      it.  (I encountered this issue when backporting a PR that introduces
      that test, so this needs to be pushed first, followed by the backporting
      PR.)
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
      af8932a0
    • Fix cdbpath_dedup_fixup not considering merge append paths. · 0085ad2a
      Committed by Zhenghua Lyu
      Greenplum uses the unique row id path as a candidate to implement
      semijoins. It was introduced long ago, but GPDB6 upgraded the kernel
      version to Postgres 9.4 and introduced many new path types and plan
      nodes, which cdbpath_dedup_fixup failed to consider.
      Some typical issues: https://github.com/greenplum-db/gpdb/issues/9427
      
      On the master branch, Heikki's commit 9628a332 refactored this part of
      the code, so master is OK. For 4X and 5X, there are not many new kinds
      of plan nodes and path nodes, so they are also OK.
      
      It is very hard to backport commit 9628a332 to 6X because there is no
      concept of a Path's target list in 9.4, and completely removing this
      kind of path would be overkill. So the policy is to fix these issues
      one by one as they are reported.
      0085ad2a
  9. 21 Jul 2020, 2 commits
  10. 20 Jul 2020, 1 commit