1. 01 Apr 2017, 21 commits
    • Rewrite kerberos tests (#2087) · 2415aff4
      Committed by Heikki Linnakangas
      * Rewrite Kerberos test suite
      
      * Remove obsolete Kerberos test stuff from pipeline and TINC
      
      We now have a rewritten Kerberos test script in installcheck-world.
      
      * Update ICW kerberos test to run on concourse container
      
      This adds a whole new test script in src/test/regress, implemented in plain bash. It sets up a temporary KDC as part of the script, and therefore doesn't rely on a pre-existing Kerberos server, like the old MU_kerberos-smoke test job did. It does require the MIT Kerberos server-side utilities to be installed instead, but no server needs to be running, and no superuser privileges are required.
      
      This supersedes the MU_kerberos-smoke behave tests. The new rewritten bash script tests the same things:
        1. You cannot connect to the server before running 'kinit' (to verify that the server doesn't just let anyone in, which could happen if the pg_hba.conf is misconfigured for the test, for example)
        2. You can connect, after running 'kinit'
        3. You can no longer connect, if the user account is expired
      
      The new test script is hooked up to the top-level installcheck-world target.
      
      There were also some Kerberos-related tests in TINC. Remove all that, too. They didn't seem interesting in the first place; they look like copies of a few random other tests, intended to be run as a smoke test after a connection had been authenticated with Kerberos, but there was nothing in TINC to actually set up the Kerberos environment.
      
      Other minor patches added:
      
      * Remove absolute paths when calling kerberos utilities
      -- assume they are on the PATH, so that they can be accessed from various installs
      -- add a clarifying message if a sample kerberos utility is not found with 'which'
      
      * Specify empty load library for kerberos tools
      
      * Move kerberos test to its own script file
      -- this allows a failure to be recorded without exiting Make, so the
      server can always be shut down afterwards
      
      * Add trap for stopping kerberos server in all cases
      * Use localhost for kerberos connection
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      Signed-off-by: Chumki Roy <croy@pivotal.io>
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
    • Fix error message if EXCHANGE PARTITION with multiple constraints fails. · 30400ddc
      Committed by Heikki Linnakangas
      The loop to print each constraint's name was broken: it printed the name of
      the first constraint multiple times. Add a test case, as a matter of principle.
      
      In passing, change the set of tests around this error to all use the same
      partitioned table, rather than dropping and recreating it for each command.
      Also reduce the number of partitions from 10 to 5, which shaves some
      milliseconds off the time to run the test.
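      A minimal sketch of the kind of case the new test covers; the table,
      column, and constraint names here are hypothetical, not the ones from
      the regression test:
      
      ```sql
      -- Partitioned table carrying two named CHECK constraints
      CREATE TABLE pt (id int, val int,
                       CONSTRAINT chk_pos CHECK (val > 0),
                       CONSTRAINT chk_cap CHECK (val < 100))
      DISTRIBUTED BY (id)
      PARTITION BY RANGE (id) (START (1) END (6) EVERY (1));
      
      -- Candidate table that lacks both constraints
      CREATE TABLE ex (id int, val int) DISTRIBUTED BY (id);
      
      -- Expected to fail; the error should now name each offending
      -- constraint rather than repeating the first one
      ALTER TABLE pt EXCHANGE PARTITION FOR (RANK(1)) WITH TABLE ex;
      ```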
    • Move sles icw job downstream from compile · a53c05d8
      Committed by Tom Meyer
      Signed-off-by: Jingyi Mei <jmei@pivotal.io>
    • Add installer header for sles 11 · 830448fd
      Committed by Tom Meyer
      Signed-off-by: Jingyi Mei <jmei@pivotal.io>
    • Set max_stack_depth explicitly in subtransaction_limit ICG test · a5e26310
      Committed by Jingyi Mei
      This comes from the 4.3_STABLE repo.
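      Presumably the test now pins the GUC up front so the subtransaction
      behavior doesn't depend on the environment's default; a minimal sketch
      (the exact value is an assumption, not taken from the ICG test):
      
      ```sql
      -- Pin the stack depth so the subtransaction limit is deterministic
      -- across hosts; '2MB' is illustrative only.
      SET max_stack_depth = '2MB';
      ```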
      Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
    • test: point suse11 openssl to suse10 · 18e39aa7
      Committed by Tom Meyer
      Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
    • Revert "Test: try removing OpenSSL from SLES; rely on openssl from third party" · 40e77c6d
      Committed by Toolsmiths Team
      This reverts commit 3531b6f681a05352f46f2f57861837f1ced2c6a0.
    • Add SLES11 related pipeline changes · 6bb5668e
      Committed by Tom Meyer
      - This only includes ICW and packaging
      
      [#139228021]
      Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
    • ivy: Use Python 2.7.12 built specifically for SLES · ac25d1dc
      Committed by David Sharp
      Signed-off-by: David Sharp <dsharp@pivotal.io>
    • Rule based partition selection for list (sub)partitions (#2076) · 5cecfcd1
      Committed by foyzur
      GPDB supports range and list partitions. Range partitions are represented as a set of rules; each rule defines the boundaries of a part. E.g., a rule might say that a part contains all values in (0, 5], where the left bound, 0, is exclusive and the right bound, 5, is inclusive. List partitions are defined by a list of values that the part will contain.
      
      ORCA uses the above rule definition to generate expressions that determine which partitions need to be scanned. These expressions are of the following types:
      
      1. Equality predicate, as in PartitionSelectorState->levelEqExpressions: used when we have a simple equality on the partitioning key (e.g., part_key = 1).
      
      2. General predicate, as in PartitionSelectorState->levelExpressions: used when we need a more complex composition, including non-equalities such as part_key > 1.
      
      Note: We also have a residual predicate, which the optimizer currently doesn't use. We are planning to remove this dead code soon.
      
      Prior to this PR, ORCA treated both range and list partitions as range partitions. This meant that each list part would be converted to a set of list values, and each of these values would become a single-point range partition.
      
      E.g., consider the DDL:
      
      ```sql
      CREATE TABLE DATE_PARTS (id int, year int, month int, day int, region text)
      DISTRIBUTED BY (id)
      PARTITION BY RANGE (year)
          SUBPARTITION BY LIST (month)
             SUBPARTITION TEMPLATE (
              SUBPARTITION Q1 VALUES (1, 2, 3), 
        SUBPARTITION Q2 VALUES (4, 5, 6),
              SUBPARTITION Q3 VALUES (7, 8, 9),
              SUBPARTITION Q4 VALUES (10, 11, 12),
              DEFAULT SUBPARTITION other_months )
      ( START (2002) END (2012) EVERY (1), 
        DEFAULT PARTITION outlying_years );
      ```
      
      Here we partition the months as a list partition using quarters, so each list part contains three months. Now consider a query on this table:
      
      ```sql
      select * from DATE_PARTS where month between 1 and 3;
      ```
      
      Prior to this PR, the ORCA-generated plan would consider each value of Q1 as a separate range part with just one point in its range; i.e., we would have 3 virtual parts to evaluate for Q1 alone: [1], [2], [3]. This approach is inefficient, and the problem is further exacerbated when we have multi-level partitioning. Consider the list part of the above example: we have only 4 rules for the 4 different quarters, but we would have 12 different virtual rules (aka constraints). For each such constraint, we would then evaluate the entire subtree of partitions.
      
      After this PR, we no longer decompose rules into constraints for list parts and then derive single-point virtual range partitions based on those constraints. Rather, ORCA now uses a ScalarArrayOp to express selectivity on a list of values. So, the expression for the above SQL will look like 1 <= ANY {month_part} AND 3 >= ANY {month_part}, where month_part is substituted at runtime with a different list of values for each quarterly partition. We end up evaluating that expression 4 times, with the following lists of values:
      
      Q1: 1 <= ANY {1,2,3} AND 3 >= ANY {1,2,3}
      Q2: 1 <= ANY {4,5,6} AND 3 >= ANY {4,5,6}
      ...
      
      Compare this to the previous approach, where we would end up evaluating 12 different expressions, each against a single point value:
      
      First constraint of Q1: 1 <= 1 AND 3 >= 1
      Second constraint of Q1: 1 <= 2 AND 3 >= 2
      Third constraint of Q1: 1 <= 3 AND 3 >= 3
      First constraint of Q2: 1 <= 4 AND 3 >= 4
      ...
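      The new overlap test can be sanity-checked in plain SQL, with arrays
      standing in for the per-quarter value lists:
      
      ```sql
      -- Q1's list {1,2,3} overlaps [1,3], so its part is scanned
      SELECT 1 <= ANY (ARRAY[1,2,3]) AND 3 >= ANY (ARRAY[1,2,3]);  -- true
      
      -- Q2's list {4,5,6} does not, so its part can be skipped
      SELECT 1 <= ANY (ARRAY[4,5,6]) AND 3 >= ANY (ARRAY[4,5,6]);  -- false
      ```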
      
      The ScalarArrayOp depends on a new type of expression, PartListRuleExpr, that converts a list rule to an array of values. The ORCA-specific changes can be found here: https://github.com/greenplum-db/gporca/pull/149
    • Fix XidlimitsTests, avoid going back after bumping the xid. · 5b2ea684
      Committed by Ashwin Agrawal
      The auto-vacuum limit is reached first, then the warn limit, followed by the
      other limits. So there is no reason to roll back after bumping the xid to the
      auto-vacuum limit; doing so can lead to all kinds of weird issues. Practically,
      these tests need to be fully rewritten, maybe by modifying the GUCs and then
      actually generating XIDs to reach the limits instead of simulating it by
      bumping the counter, but that will be attempted in another commit.
    • Avoid PANIC: fetch transaction ID outside critical section. · 6e8a00b3
      Committed by Ashwin Agrawal
      In some of the persistent table functions, GetTopTransactionId() was called
      inside a critical section. With lazy xid allocation, if the transactionId stop
      limit has been reached at DDL time, this upgrades a simple ERROR to a PANIC.
      Hence, modify the code to call GetTopTransactionId() before entering the
      critical section, so that we just get an ERROR as before, not a PANIC.
    • Cleanup LocalDistribXactData related code. · 8c20bc94
      Committed by Ashwin Agrawal
      Commit fb86c90d "Simplify management of
      distributed transactions." cleaned up a lot of code for LocalDistribXactData
      and introduced LocalDistribXactData in PROC for debugging purposes. But it was
      only correctly maintained for QEs; the QD never populated LocalDistribXactData
      in MyProc. Instead, TMGXACT also had a LocalDistribXactData, which was set
      initially for the QD but never updated afterwards, and confused more than it
      served the purpose. Hence, remove LocalDistribXactData from TMGXACT, as it
      already has other fields that provide the required information. Also, clean up
      the QD-related states, as even in PROC only QEs use LocalDistribXactData.
    • Fully enable lazy XID allocation in GPDB. · 0932453d
      Committed by Ashwin Agrawal
      As part of the 8.3 merge, upstream commit 295e6398
      "Implement lazy XID allocation" was merged. But transactionIds were still
      allocated in StartTransaction, because the code changes required to make this
      work with GPDB's distributed transactions were pending, so the feature
      remained disabled. Some progress was made by commit
      a54d84a3 "Avoid assigning an XID to
      DTX_CONTEXT_QE_AUTO_COMMIT_IMPLICIT queries." This commit now addresses the
      pending work needed to handle deferred xid allocation correctly with
      distributed transactions, and fully enables the feature.
      
      Important highlights of changes:
      
      1] Modify the xlog write and replay records for DISTRIBUTED_COMMIT. Even if a
      transaction is read-only on the master and no xid is allocated to it, it can
      still be a distributed transaction and hence needs to persist itself in such a
      case. So, write the xlog record even if no local xid is assigned but the
      transaction is prepared. Similarly, during xlog replay of the
      XLOG_XACT_DISTRIBUTED_COMMIT record type, perform distributed commit recovery,
      ignoring the local commit. This also means that in this case we don't commit
      to the distributed log, as it is only used to perform the reverse map of local
      xid to distributed xid.
      
      2] Remove localXID from gxact, as it no longer needs to be maintained and used.
      
      3] Refactor the code for QE Reader StartTransaction. There used to be a
      wait-loop with sleeps, checking whether SharedLocalSnapshotSlot had the same
      distributed XID as the READER, to assign the reader the same xid as the writer
      for SET-type commands, until the READER actually performed GetSnapShotData().
      Since now a) the writer is not going to have a valid xid until it performs
      some write, so the writer's transactionId here always turns out to be
      InvalidTransactionId, and b) read operations like SET don't need an xid any
      more, the need for this wait is gone.
      
      4] Throw an error if using a distributed transaction without a distributed
      xid. Earlier, AssignTransactionId() was called for this case in
      StartTransaction(), but such a scenario doesn't exist, hence convert it to an
      ERROR.
      
      5] Earlier, during snapshot creation in createDtxSnapshot(), the QD was able
      to fill in the localXid in inProgressEntryArray corresponding to each
      distribXid, as the localXid was known by that time. That's no longer the case,
      and the localXid will mostly get assigned after the snapshot is taken. Hence
      now, even for the QD, just as for the QEs, the localXid is not populated at
      snapshot creation time but is looked up later in
      DistributedSnapshotWithLocalMapping_CommittedTest(). There is a chance to
      optimize and approximate the earlier behavior by populating gxact in
      AssignTransactionId() once the localXid is known, but currently that doesn't
      seem worth it, as the QEs have to perform the lookups anyway.
    • bc967e0b
    • Make storage test robust by checking if DB up. · 2cd7fd17
      Committed by Ashwin Agrawal
    • Optimize distributed xact commit check. · 692be1a1
      Committed by Ashwin Agrawal
      Leverage the fact that inProgressEntryArray is sorted on distribXid when the
      snapshot is created in createDtxSnapshot, so that we can break out of the loop
      early in DistributedSnapshotWithLocalMapping_CommittedTest().
    • Use ReadNewTransactionId() for xid_age(). · 8e115f54
      Committed by Ashwin Agrawal
      GetTopTransactionId() bumps the xid counter, which isn't required for this
      function. In GPDB this function may be called by a QE READER, causing an XID
      assignment in the reader, which is a violation. Hence, use
      ReadNewTransactionId() instead, to just read the current value without
      bumping it.
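      For context, xid_age() backs the SQL-level age(xid) function, so after
      this change a routine catalog query like the following only reads the
      XID counter instead of advancing it (a sketch of ordinary usage):
      
      ```sql
      -- How many transactions old is each table's frozen xid?  Evaluating
      -- age() no longer assigns an XID, so a QE reader can run it safely.
      SELECT relname, age(relfrozenxid)
      FROM pg_class
      WHERE relkind = 'r'
      ORDER BY 2 DESC
      LIMIT 3;
      ```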
2. 31 Mar 2017, 17 commits
3. 30 Mar 2017, 2 commits
    • Removes non-working @wip tests (#2105) · bbf6b1ae
      Committed by Todd Sedano
    • Remove slow CTAS from gp_create_table to speed up test · ade37378
      Committed by Daniel Gustafsson
      The gp_create_table suite tests that the maximum number of columns
      can be specified in a distribution clause (well, it really tests
      that CREATE TABLE allows the maximum number of columns, since that
      check applies first), and then tests a CTAS from the resulting
      table. There is no reason to believe that CTAS with 1600 columns
      behaves differently from CTAS with fewer columns. Remove this case
      to speed up the test significantly, and also adjust the DROP TABLE
      clauses to match reality.
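      The removed step had roughly this shape (a reduced sketch: the suite
      declares all 1600 columns, and the names here are illustrative):
      
      ```sql
      -- Wide table with a full-width distribution key...
      CREATE TABLE wide_t (c1 int, c2 int, c3 int /* ... up to c1600 */)
      DISTRIBUTED BY (c1, c2, c3 /* ... */);
      
      -- ...and the CTAS from it that this commit removes
      CREATE TABLE wide_t_ctas AS SELECT * FROM wide_t
      DISTRIBUTED BY (c1, c2, c3);
      ```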