1. April 17, 2018 (5 commits)
    • Fix 'distribution_policy' issue on replicated table by gpcheckcat · 5001751a
      Committed by Pengzhou Tang
      The 'distribution_policy' test performs constraint checks on
      randomly-distributed tables. However, attrnums = null in
      gp_distribution_policy is no longer sufficient to identify a randomly
      distributed table now that the new replicated distribution policy has
      been introduced, so add an extra filter to make things right.
      5001751a
    • Disable large objects · 6a343b61
      Committed by David Kimura
      Large objects are currently not supported in Greenplum. Rather than
      deceive the user with a non-functional large object API, we disable
      them for now.
      
      We disable the large object tests in the privileges regress test
      using ignore blocks, instead of commenting them out or deleting them,
      to reduce merge conflicts in future postgres merges.
      Co-authored-by: Jimmy Yih <jyih@pivotal.io>
      6a343b61
    • Bump ORCA version to 2.55.20 · 5d60287b
      Committed by Sambitesh Dash
      5d60287b
    • docs: add guc gp_max_slices (#4854) · c37385ab
      Committed by Mel Kiyama
      c37385ab
    • docs: Add guc verify_gpfdists_cert (#4851) · 277f31c8
      Committed by Mel Kiyama
      * docs: Add guc verify_gpfdists_cert

      - Added the guc definition to the list of gucs.
      - Added links to the guc from the appropriate topics.
      
      PR for 5X_STABLE
      Will be ported to MAIN
      
      * docs: verify_gpfdists_cert guc updates
      - Add the SSL exceptions that are ignored.
      - Other minor edits.
      
      * docs: guc verify_gpfdists_cert - fix typos
      277f31c8
  2. April 14, 2018 (4 commits)
    • Remove AppendOnlyStorage_GetUsableBlockSize(). · 0a119de3
      Committed by Ashwin Agrawal
      When the blocksize is 2MB, AppendOnlyStorage_GetUsableBlockSize would
      return the wrong usable block size: the expected result is 2MB, but
      the function returned (2M - 4). This is because the macro
      AOSmallContentHeader_MaxLength is defined as (2M - 1); after rounding
      down to 4-byte alignment, the result is (2M - 4).

      Without the fix, errors such as the following can occur: "ERROR: Used
      length 2097152 greater than bufferLen 2097148 at position 8388592 in
      table 'xxxx'".

      Also removed some related but unused macros to clean up the AO
      storage code.
      Co-authored-by: Lirong Jian <jian@hashdata.cn>
      0a119de3
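      The rounding described above can be sketched in a few lines of C (a
      minimal reconstruction from this commit message; the macro value and
      the 4-byte round-down are taken from the text, not copied from the
      actual GPDB headers):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* 2MB blocksize, and the old header-max macro defined as (2M - 1). */
      #define AO_BLOCKSIZE_2MB (2 * 1024 * 1024)
      #define AOSmallContentHeader_MaxLength (AO_BLOCKSIZE_2MB - 1)

      /* Old behavior: clamp to the header max, then round down to a
       * 4-byte boundary -- turning 2MB into (2M - 4) usable bytes. */
      static int32_t old_usable_block_size(int32_t blocksize)
      {
          int32_t usable = blocksize;
          if (usable > AOSmallContentHeader_MaxLength)
              usable = AOSmallContentHeader_MaxLength; /* 2M -> 2M - 1 */
          return usable & ~3;                          /* 2M - 1 -> 2M - 4 */
      }
      ```

      For a 2MB blocksize this returns 2097148, which is exactly the
      "bufferLen 2097148" in the error message above.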
    • Use relmapping file in gp_replica_check. · da277082
      Committed by Ashwin Agrawal
      Commit b9b8831a introduced the "relation mapping" infrastructure,
      which stores relfilenode as "0" in pg_class for shared and nailed
      catalog tables; the actual relfilenode is stored in a separate file
      that maps OIDs to relfilenodes. gp_replica_check builds a hashtable
      of relfilenodes to look up the actual files on disk, so while
      populating this hashtable, consult the relmap file to get the
      relfilenode for tables with relfilenode = 0.

      This should help avoid WARNINGs like "relfilenode XXXX not present in
      primary's pg_class" or "found extra unknown file on mirror:" from
      gp_replica_check.
      Co-authored-by: David Kimura <dkimura@pivotal.io>
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      da277082
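      The fallback lookup can be sketched as follows (a toy illustration
      with a hypothetical in-memory map and made-up OID/relfilenode pairs;
      the server consults the relmapper infrastructure, not an array):

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>

      typedef uint32_t Oid;

      /* Toy stand-in for the relmap file: OID -> relfilenode pairs. */
      struct relmap_entry { Oid oid; Oid relfilenode; };
      static const struct relmap_entry relmap[] = {
          { 1259, 16384 },   /* e.g. a mapped catalog such as pg_class */
          { 1260, 16385 },
      };

      /* A relfilenode of 0 in pg_class means the relation is "mapped":
       * fall back to the relmap instead of trusting pg_class. */
      static Oid resolve_relfilenode(Oid reloid, Oid pg_class_relfilenode)
      {
          if (pg_class_relfilenode != 0)
              return pg_class_relfilenode;
          for (size_t i = 0; i < sizeof relmap / sizeof relmap[0]; i++)
              if (relmap[i].oid == reloid)
                  return relmap[i].relfilenode;
          return 0;  /* unknown mapped relation */
      }
      ```

      Relations with a non-zero relfilenode pass through unchanged; only
      the mapped ones take the extra lookup.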
    • Remove FIXME in statistic computation of planner · b7abfcbf
      Committed by Venkatesh Raghavan
      The changes in 8.4 for statistics computation are comprehensive. We
      will fix any fallouts of this change as we observe them. Current
      tests show that things are kosher, so remove the FIXME.
      b7abfcbf
    • Bump ORCA to v2.55.19 · 9a70b244
      Committed by Bhuvnesh Chaudhary
      9a70b244
  3. April 13, 2018 (17 commits)
  4. April 12, 2018 (3 commits)
    • Implement NUMERIC upgrade for AOCS versions < 8.3 · 54895f54
      Committed by Jacob Champion
      The 8.2->8.3 upgrade of NUMERIC types was implemented for
      row-oriented AO tables, but not for column-oriented ones. Correct
      that here.
      
      Store upgraded Datum data in a per-DatumStream buffer, to avoid
      "upgrading" the same data multiple times (multiple tuples may be
      pointing at the same data buffer, for example with RLE compression).
      Cache the column's base type in the DatumStreamRead struct.
      Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
      54895f54
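      The upgrade-once idea can be sketched like this (illustrative names
      and a global counter for demonstration only; the real cache lives in
      the per-DatumStream read state):

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Counter so the sketch can show how many upgrades actually ran. */
      static int numeric_upgrades_done = 0;

      /* Stand-in for the real in-place NUMERIC datum conversion. */
      static void upgrade_numeric_buffer(char *buf, size_t len)
      {
          (void) buf; (void) len;
          numeric_upgrades_done++;
      }

      /* Upgrade each buffer at most once: with RLE compression many
       * tuples can point into the same buffer, so remember the last
       * buffer we converted and skip repeats. */
      static void maybe_upgrade(char *buf, size_t len, char **last_upgraded)
      {
          if (buf != *last_upgraded) {
              upgrade_numeric_buffer(buf, len);
              *last_upgraded = buf;
          }
      }
      ```

      Two tuples pointing at the same buffer trigger only one conversion;
      a new buffer resets the cache.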
    • Revert "Fix bug that planner generates redundant motion for joins on distribution key" · 2a326e59
      Committed by Bhuvnesh Chaudhary
      This reverts commit 8b0a7fed.
      
      Due to this commit, full join queries with a condition on varchar
      columns started failing with the error below. It is expected that
      there is a relabelnode on top of varchar columns while looking up
      the sort operator, but the reverted commit removed it.
      
      ```sql
      create table foo(a varchar(30), b varchar(30));
      postgres=# select X.a from foo X full join (select a from foo group by 1) Y ON X.a = Y.a;
      ERROR:  could not find member 1(1043,1043) of opfamily 1994 (createplan.c:4664)
      ```
      
      Will reopen issue #4175, which prompted this patch.
      2a326e59
    • Add missing #ifdef block in aset.c (#4704) · 3ddbb283
      Committed by Andreas Scherbaum
      3ddbb283
  5. April 11, 2018 (8 commits)
    • Add a GUC to limit the number of slices for a query · d716a92f
      Committed by Pengzhou Tang
      Executing a query plan containing a large number of slices may slow down
      the entire Greenplum cluster: each "n-gang" slice corresponds to a
      separate process per segment. An example of such queries is a UNION ALL
      atop several complex views. To prevent such a situation, add a GUC
      gp_max_slices and refuse to execute plans whose number of slices
      exceeds that limit.
      Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
      d716a92f
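      The check amounts to a one-line predicate (a sketch with
      illustrative names, assuming a limit of 0 or less means "no limit";
      the real code lives in the executor and reports an error rather than
      returning a boolean):

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Accept a plan only if its slice count is within gp_max_slices;
       * a non-positive limit disables the check entirely. */
      static bool plan_within_slice_limit(int num_slices, int gp_max_slices)
      {
          return gp_max_slices <= 0 || num_slices <= gp_max_slices;
      }
      ```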
    • Add missing header file of gppc · adba45a9
      Committed by xiong-gang
      adba45a9
    • Add GUC verify_gpfdists_cert · d66a7a1f
      Committed by xiong-gang
      This GUC determines whether curl verifies the authenticity of the
      gpfdist certificate.
      d66a7a1f
    • CI: Use larger instance type for icw sles12 tests · ba5cfdbd
      Committed by Alexandra Wang
      Update the template pipeline to reflect this change.
      553b8754
      Authored-by: Alexandra Wang <lewang@pivotal.io>
      ba5cfdbd
    • CI: Replace top level `*_anchor:`s with a single list of `anchors` · 26d69d7e
      Committed by Alexandra Wang
      Re-apply 3a772cfd; this commit was accidentally overwritten when
      introducing the CCP 2.0 change.
      Authored-by: Alexandra Wang <lewang@pivotal.io>
      26d69d7e
    • Concourse: Make setup_gpadmin_user.bash sourceable · 91bb0d6f
      Committed by David Sharp
      Authored-by: David Sharp <dsharp@pivotal.io>
      91bb0d6f
    • Fix Analyze privilege issue when executed by superuser · 3c139b9f
      Committed by Bhuvnesh Chaudhary
      The patch 62aba765 from upstream fixed CVE-2009-4136 (a security
      vulnerability) with the intent to properly manage session-local
      state during execution of an index function by a database superuser,
      which in some cases allowed remote authenticated users to gain
      privileges via a table with crafted index functions.

      Looking into the details of CVE-2009-4136 and the related
      CVE-2007-6600, the patch should ideally have limited its scope to
      computing statistics on index expressions, where functions are run
      to evaluate the expression and could potentially present a security
      threat.

      However, the patch switched the user to the table owner before
      collecting the sample, so even when analyze was run by a superuser
      the sample could not be collected if the table owner did not have
      sufficient privileges to access the table. With this commit, we
      switch back to the original user while collecting the sample, as
      sampling does not involve indexes or function calls, which were the
      original concern of the patch.

      Upstream did not face the privilege issue, as it does block sampling
      instead of issuing a query.
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
      3c139b9f
    • Address GPDB_84_MERGE_FIXME in simplify_EXISTS_query() · 99450728
      Committed by Abhijit Subramanya
      This FIXME is two-fold:
      - Handling LIMIT 0
        The LIMIT is already handled in the caller,
        convert_EXISTS_sublink_to_join(): when an existential sublink
        contains an aggregate without GROUP BY or HAVING, we can safely
        replace it with a one-time TRUE/FALSE filter based on the type of
        sublink, since the result of the aggregate is always exactly one
        row even if its input has zero rows. However, this assumption is
        incorrect when the sublink contains LIMIT/OFFSET, for example when
        the final limit count after applying the offset is 0.
      
      - Rules for demoting HAVING to WHERE
        Previously, simplify_EXISTS_query() only allowed demoting HAVING
        quals to WHERE if they did not contain any aggregates. To determine
        this, it used query->hasAggs, which is incorrect since hasAggs
        indicates that an aggregate is present in either the targetlist or
        HAVING. This penalized queries whose HAVING did not contain an
        aggregate but whose targetlist did (as demonstrated in the newly
        added test). That check is now replaced by contain_aggs_of_level().
        Also, do not demote if HAVING contains volatile functions, since
        they need to be evaluated once per group.
      Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
      99450728
  6. April 10, 2018 (3 commits)