1. 25 Sep 2018 (5 commits)
    • Delete SIGUSR2-based fault injection logic in walreceiver. · fc008690
      Ashwin Agrawal committed
      Regular fault injection doesn't work on mirrors, so a separate mechanism
      using the SIGUSR2 signal coupled with an on-disk file was coded just for
      testing fault injection. That approach was hacky and intrusive, hence the
      plan to get rid of it. Most of the tests using this framework were found
      not to be useful, since the majority of the code is upstream. If testing
      is still needed, a better alternative will be explored.
    • Remove remaining unused pieces of wal_consistency_checking. · c9dee15b
      Ashwin Agrawal committed
      Most of the backup-block modifications supporting wal_consistency_checking
      were removed as part of the 9.3 merge, mainly to avoid merge conflicts.
      The masking functions are still used by the gp_replica_check tool to check
      primaries against mirrors, but the online check performed during each
      record replay was let go. This commit cleans up the remaining unused
      pieces. We will bring this back in properly working condition when we
      catch up to upstream.
    • Remove some unused and unimplemented fault types. · c2bbca41
      Ashwin Agrawal committed
      Remove the fault types that either have no implementation or have one
      that doesn't seem usable, leaving only the working subset of faults. The
      data corruption fault, for example, seems pretty useless: if it is ever
      needed, it can easily be coded for the specific use case with the skip
      fault, instead of having a special type defined for it.

      The fault type "fault" is redundant with "error", so it is removed as
      well.
    • Add gpdb-specific files to .gitignore · 36d33485
      Ashwin Agrawal committed
    • Fix volatile functions handling by ORCA · e17c6f9a
      Dhanashree Kashid committed
      The following commits have been cherry-picked again:

      b1f543f3.

      b0359e69.

      a341621d.

      The contrib/dblink tests were failing with ORCA after the above commits.
      The issue has now been fixed in ORCA v3.1.0, so we re-enabled these
      commits and bumped the ORCA version.
2. 24 Sep 2018 (3 commits)
    • Remove FIXME; accept that we won't have this assertion anymore. · 1d254cf1
      Heikki Linnakangas committed
      I couldn't find an easy way to make this assertion work with the
      "flattened" range table in 9.3. The information needed for it is zapped
      away in add_rte_to_flat_rtable(). I think we can live without this
      assertion.
    • Fix UPDATE RETURNING on distribution key columns. · 306b114b
      Heikki Linnakangas committed
      Updating a distribution key column is performed as a "split update", i.e.
      separate DELETE and INSERT operations, which may happen on different nodes.
      In case of RETURNING, the DELETE operation was also returning a row, and it
      was also incorrectly counted in the row count returned to the client, in
      the command tag (e.g. "UPDATE 2"). Fix, and add a regression test.
      
      Fixes https://github.com/greenplum-db/gpdb/issues/5839
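      The counting rule the fix implements can be sketched as follows (a
      minimal illustration with invented names, not GPDB's actual executor
      code): of the two halves a split update produces, only the INSERT half
      should be counted for the command tag and emitted for RETURNING.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Invented names for illustration: a "split update" of a distribution
       * key column becomes a DELETE on the old segment plus an INSERT on the
       * new one. */
      typedef enum { SPLIT_DELETE, SPLIT_INSERT } SplitAction;

      /* Count rows for the command tag: the DELETE half is internal
       * bookkeeping, so only INSERT halves count (the same rule applies to
       * which halves emit a RETURNING row). */
      static size_t
      count_updated_rows(const SplitAction *actions, size_t n)
      {
          size_t updated = 0;
          for (size_t i = 0; i < n; i++)
              if (actions[i] == SPLIT_INSERT)
                  updated++;
          return updated;
      }
      ```

      With this rule, updating two rows reports "UPDATE 2" even though four
      low-level operations were executed.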
    • Refactor code in ProcessRepliesIfAny() to match upstream. · 9e5b20e8
      Heikki Linnakangas committed
      The reason we needed the extra pq_getmessage() call, marked with a FIXME
      comment, was that we were missing the pq_getmessage() call in
      ProcessStandbyMessage() that the corresponding upstream version, at the
      point we're caught up to in the merge, had. I believe it was missing from
      ProcessStandbyMessage() because we had earlier backported upstream commit
      cd19848bd55, which removed the pq_getmessage() call from
      ProcessStandbyMessage() and added one in ProcessRepliesIfAny() instead.

      Clarify this by changing the code to match upstream commit cd19848bd55.
      (Except that we don't have pq_startmsgread() yet; that will arrive when
      we merge the rest of commit cd19848bd55.)
3. 23 Sep 2018 (3 commits)
4. 22 Sep 2018 (6 commits)
    • Revert "Add DEBUG mode to the explain_memory_verbosity GUC" · 984cd3b9
      Jesse Zhang committed
      Commit 825ca1e3 didn't seem to work well when we hooked up ORCA's memory
      system to memory accounting: we are tripping multiple asserts in
      regression tests. The failures suggest we are double-freeing somewhere
      (or accounting incorrectly). Reverting for now to get master back to
      green.

      This reverts commit 825ca1e3.
    • Add DEBUG mode to the explain_memory_verbosity GUC · 825ca1e3
      Taylor Vesely committed
      The memory accounting system generates a new memory account for every
      execution node initialized in ExecInitNode. The address to these memory
      accounts is stored in the shortLivingMemoryAccountArray. If the memory
      allocated for shortLivingMemoryAccountArray is full, we will repalloc
      the array with double the number of available entries.
      
      After creating approximately 67000000 memory accounts, it will need to
      allocate more than 1GB of memory to increase the array size, and throw
      an ERROR, canceling the running query.
      
      PL/pgSQL and SQL functions create new executors/plan nodes that must be
      tracked by the memory accounting system. This level of detail is not
      necessary for tracking memory leaks, and creating a separate memory
      account for every executor would use a large amount of memory just to
      track the accounts themselves.
      
      Instead of tracking millions of individual memory accounts, we
      consolidate all child executor accounts into a special 'X_NestedExecutor'
      account. If explain_memory_verbosity is set to 'detailed' or below, all
      child executors are consolidated into this account.
      
      If more detail is needed for debugging, set explain_memory_verbosity to
      'debug', where, as was the previous behavior, every executor will be
      assigned its own MemoryAccountId.
      
      Originally we tried to remove nested execution accounts after they
      finish executing, but rolling over those accounts into a
      'X_NestedExecutor' account was impracticable to accomplish without the
      possibility of a future regression.
      
      If any accounts created between nested executors were not rolled over
      into an 'X_NestedExecutor' account, the bookkeeping that records which
      accounts have been rolled over would grow the same way the
      shortLivingMemoryAccountArray grows today, and would likewise become too
      large to reasonably fit in memory.

      Iterating through the SharedHeaders every time a nested executor
      finishes would not be very performant either.
      
      While we were at it, convert some of the convenience macros dealing with
      memory accounting for executor/planner nodes into functions, and move
      them out of the memory accounting header files into their sole callers'
      compilation units.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
      Co-authored-by: Adam Berlin <aberlin@pivotal.io>
      Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
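      The 1 GB figure follows directly from the doubling growth strategy. A
      back-of-the-envelope sketch (not GPDB code; it assumes 8-byte pointers
      and PostgreSQL's roughly 1 GB palloc allocation limit):

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Sketch only: the short-living account array doubles whenever it
       * fills.  Once roughly 67 million accounts exist, the next doubling of
       * an array of 8-byte pointers asks for 1 GiB, which exceeds palloc's
       * MaxAllocSize (just under 1 GB), so the allocation raises an ERROR
       * and cancels the running query. */
      static size_t
      capacity_after_growth(size_t initial_cap, size_t accounts_needed)
      {
          size_t cap = initial_cap;
          while (cap < accounts_needed)
              cap *= 2;               /* repalloc with double the entries */
          return cap;
      }
      ```

      Starting from 64 entries, holding even one account more than 67,108,864
      forces the capacity to 134,217,728 entries, i.e. a 1 GiB pointer array.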
    • Move memoryAccountId out of PlannedStmt/Plan Nodes · 7c9cc053
      Taylor Vesely committed
      Functions using SQL and PL/pgSQL plan and execute arbitrary SQL inside a
      running query. The first time we initialize a plan for an SQL block, the
      memory accounting system creates a new memory account for each
      Executor/Node. When we execute a cached plan (i.e. plancache.c), the
      memory accounts will have already been assigned in a previous execution
      of the plan.

      As a result, when explain_memory_verbosity is set to 'detail', it is not
      clear which memory account corresponds to which executor. Instead, move
      the memoryAccountId into PlanState/QueryDesc, which ensures that every
      time we initialize an executor it is assigned a unique memoryAccountId.
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
    • Remove FIXME in RemoveLocalLock; it's alright. · 9e57124b
      Heikki Linnakangas committed
      The FIXME was added to GPDB in commit f86622d9, which backported the
      local cache of resource owners attached to LOCALLOCK. I think the comment
      was added because, in the upstream commit that added the cache, upstream
      didn't yet have the check guarding the pfree(). It was added later in
      upstream too, in commit 7e6e3bdd3c, and that had already been backported
      to GPDB. So it's alright: the guard on the pfree is a good thing to have,
      and there's nothing further to do here.
    • Change pretty-printing of expressions in EXPLAIN to match upstream. · 4c54c894
      Heikki Linnakangas committed
      We had changed this in GPDB to print fewer parens. That's fine and dandy,
      but it hardly seems worth carrying a diff vs. upstream for it. Which
      format is better is a matter of taste: the extra parens make some
      expressions clearer, but OTOH they're unnecessarily verbose for simple
      expressions. Let's follow upstream on this.

      These changes were made to GPDB back in 2006, as part of backporting
      EXPLAIN-related patches from PostgreSQL 8.2, but I didn't see any
      explanation for this particular change in output in that commit message.

      It's nice to match upstream, to make merging easier. However, this won't
      make much difference to that: almost all EXPLAIN plans in regression
      tests differ from upstream anyway, because GPDB needs Motion nodes for
      most queries. But every little bit helps.
    • Remove commented-out block of macOS makefile stuff. · c5d875b5
      Heikki Linnakangas committed
      I don't understand what all this was about, but people have compiled GPDB
      successfully after the merge commit, where this was commented out, so
      apparently it's not needed.
5. 21 Sep 2018 (18 commits)
    • Remove duplicated code to handle SeqScan, AppendOnlyScan and AOCSScan. · ff8161a2
      Heikki Linnakangas committed
      They were all treated the same, with the SeqScan code duplicated for
      AppendOnlyScans and AOCSScans. That is a merge hazard: if some code is
      changed for SeqScans, we have to remember to manually update the other
      copies. Small differences in the code had already crept in, although
      given that everything worked, I guess they had no effect, or only a
      small effect on the computed costs.

      To avoid the duplication, use SeqScan for all of them. Also get rid of
      TableScan as a separate node type, and have the ORCA translator also
      create SeqScans.

      The executor for the SeqScan node can handle heap, AO and AOCS tables,
      because we're not actually using the upstream SeqScan code for it. We're
      using the GPDB code in nodeTableScan.c, with a TableScanState, rather
      than SeqScanState, as the executor node. That's how it worked before
      this patch already; what this patch changes is that we now use SeqScan
      *before* the executor phase, instead of
      SeqScan/AppendOnlyScan/AOCSScan/TableScan.

      To avoid having to change all the expected outputs for tests that use
      EXPLAIN, add code to still print the SeqScan as "Seq Scan", "Table Scan",
      "Append-only Scan" or "Append-only Columnar Scan", depending on whether
      the plan was generated by ORCA, and what kind of a table it is.
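      The label selection described above can be sketched like this (invented
      names, and the exact ORCA/storage-kind mapping shown here is only
      illustrative; the real logic lives in GPDB's EXPLAIN code):

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <string.h>

      /* Illustrative only: every scan is a SeqScan node in the plan now, but
       * EXPLAIN keeps printing the legacy labels based on who generated the
       * plan and what kind of table is scanned, so that expected regression
       * test outputs stay unchanged. */
      typedef enum { STORAGE_HEAP, STORAGE_AO_ROW, STORAGE_AO_COLUMN } StorageKind;

      static const char *
      explain_scan_label(StorageKind kind, bool plan_from_orca)
      {
          if (plan_from_orca)
              return "Table Scan";              /* ORCA-generated plans */
          switch (kind)
          {
              case STORAGE_AO_ROW:    return "Append-only Scan";
              case STORAGE_AO_COLUMN: return "Append-only Columnar Scan";
              default:                return "Seq Scan";  /* heap */
          }
      }
      ```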
    • Move UnpackCheckPointRecord to xlogdesc.c, to avoid duplicating it. · 16343336
      Heikki Linnakangas committed
      As noted in the FIXME, having two copies of the function is bad. It's
      easy to avoid the duplication if we just put it in xlogdesc.c, so that
      it's available to xlog_desc() in client programs, too.
    • Remove unused variable · 414531a6
      Daniel Gustafsson committed
      Fixes a compiler warning about an unused variable left over from the 9.3
      merge.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
    • Avoid inconsistent type declaration · e52f9c2d
      Alvaro Herrera committed
      Clang 3.3 correctly complains that a variable of type enum
      MultiXactStatus cannot hold a value of -1, which makes sense.  Change
      the declared type of the variable to int instead, and apply casting as
      necessary to avoid the warning.

      Per notice from Andres Freund
    • Merge with PostgreSQL 9.3 (up to almost 9.3beta2) · c7649f18
      Heikki Linnakangas committed
      Merge with PostgreSQL, up to the point where the REL9_3_STABLE branch was
      created, and 9.4 development started on the PostgreSQL master branch. That
      is almost up to 9.3beta2.
      
      Notable upstream changes, from a GPDB point of view:
      
      * LATERAL support. Mostly works in GPDB now, although performance might not
        be very good. LATERAL subqueries, except for degenerate cases that can be
        made non-LATERAL during optimization, typically use nested loop joins.
        Unless the data distribution is the same on both sides of the join, GPDB
        needs to add Motion nodes, and cannot push down the outer query parameter
        to the inner side through the motion. That is the same problem we have
        with SubPlans and nested loop joins in general, but it happens frequently
        with LATERAL. Also, there are a couple of cases, covered by the upstream
        regression tests, where the planner currently throws an error. They have
        been disabled and marked with GPDB_93_MERGE_FIXME comments, and will need
        to be investigated later. Also, no ORCA support for LATERAL yet.
      
      * Materialized views. They have not been made to work in GPDB yet. CREATE
        MATERIALIZED VIEW works, but REFRESH MATERIALIZED VIEW does not. The
        'matviews' test has been temporarily disabled, until that's fixed. There
        is a GPDB_93_MERGE_FIXME comment about this too.
      
      * Support for background worker processes. Nothing special was done about
        them in the merge, but we could now make use of them for all the various
        GPDB-specific background processes, like the FTS prober and gpmon
        processes.
      
      * Support for writable foreign tables was introduced. I believe foreign
        tables now have all the same functionality, at a high level, as external
        tables, so we could start merging the two concepts. But this merge commit
        doesn't do anything about that yet, external tables and foreign tables
        are still two entirely different beasts.
      
      * A lot of expected output churn, thanks to a few upstream changes. We no
        longer print a NOTICE on implicitly created indexes and sequences (commit
        d7c73484), and the rules on when table aliases are printed were changed
        (commit 11e13185).
      
       * Caught up to a bunch of features that we had already backported from 9.3:
         data page checksums, numeric datatype speedups, COPY FROM/TO PROGRAM, and
         pg_upgrade as a whole.
      
      A couple of other noteworthy changes:
      
      * contrib/xlogdump utility is removed, in favor of the upstream
        contrib/pg_xlogdump utility.
      
      * Removed "idle session timeout" hook. The current implementation was badly
        broken by upstream refactoring of timeout handling (commit f34c68f0).
        We'll probably need to re-introduce it in some form, but it will look
        quite different, to make it fit more nicely with the new timeout APIs.
       Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
       Co-authored-by: Asim R P <apraveen@pivotal.io>
       Co-authored-by: David Kimura <dkimura@pivotal.io>
       Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
       Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
       Co-authored-by: Jacob Champion <pchampion@pivotal.io>
       Co-authored-by: Jinbao Chen <jinchen@pivotal.io>
       Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
       Co-authored-by: Paul Guo <paulguo@gmail.com>
       Co-authored-by: Richard Guo <guofenglinux@gmail.com>
       Co-authored-by: Shaoqi Bai <sbai@pivotal.io>
    • Fix travis build error; the old apr tarball doesn't exist anymore · 27adcf92
      Adam Lee committed
      ```
      $ wget http://ftp.jaist.ac.jp/pub/apache/apr/${APR}.tar.gz
      --2018-09-21 07:16:24--  http://ftp.jaist.ac.jp/pub/apache/apr/apr-1.6.3.tar.gz
      Resolving ftp.jaist.ac.jp (ftp.jaist.ac.jp)... 150.65.7.130, 2001:df0:2ed:feed::feed
      Connecting to ftp.jaist.ac.jp (ftp.jaist.ac.jp)|150.65.7.130|:80... connected.
      HTTP request sent, awaiting response... 404 Not Found
      2018-09-21 07:16:25 ERROR 404: Not Found.
      ```
    • Fix COPY SEGV caused by uninitialized variables · 688a43f0
      Adam Lee committed
      It happens when the COPY command errors out before dispatcherState is
      assigned. Fix it by initializing dispatcherState to NULL, and allocate
      with palloc0() to avoid the same issue with members added in the future.

      5X has no such problem.
      ```
      (gdb) c
      Continuing.
      Detaching after fork from child process 25843.

      Program received signal SIGSEGV, Segmentation fault.
      0x0000000000aa04dd in getCdbCopyPrimaryGang (c=0x23d4150) at cdbcopy.c:44
      44              return (Gang *)linitial(c->dispatcherState->allocatedGangs);
      (gdb) bt
      #0  0x0000000000aa04dd in getCdbCopyPrimaryGang (c=0x23d4150) at cdbcopy.c:44
      #1  0x0000000000aa12d8 in cdbCopyEndAndFetchRejectNum (c=0x23d4150, total_rows_completed=0x0, abort_msg=0xd0c8f8 "aborting COPY in QE due to error in QD") at cdbcopy.c:642
      #...
      (gdb) p c->dispatcherState
      $1 = (struct CdbDispatcherState *) 0x100000000
      ```
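      The bug class is easy to demonstrate in miniature (simplified, invented
      struct names, not the actual GPDB code): an allocation that isn't zeroed
      leaves dispatcherState as garbage, so an early error path that
      dereferences it crashes. Zero-initializing the allocation, as palloc0
      does in GPDB and calloc does here, makes "not yet dispatched" an
      explicit NULL.

      ```c
      #include <assert.h>
      #include <stdlib.h>

      /* Simplified sketch of the fix: zero the whole struct at allocation
       * time so every pointer member starts out NULL, including any members
       * added in the future. */
      typedef struct DispatcherState DispatcherState;
      typedef struct
      {
          DispatcherState *dispatcherState;   /* NULL until dispatch begins */
      } CdbCopy;

      static CdbCopy *
      makeCdbCopy(void)
      {
          /* calloc zeroes every member, like palloc0: error paths can now
           * safely test dispatcherState against NULL. */
          return calloc(1, sizeof(CdbCopy));
      }
      ```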
    • Use psql's unaligned format in EXPLAIN tests, to make them less brittle. · 43ccd3d5
      Heikki Linnakangas committed
      In aligned format, there is an end-of-line marker at the end of each line,
      and its position depends on the longest line. If the width changes, all
      lines need to be adjusted for the moved end-of-line-marker.
      
      While testing this, we found out that 'atmsort' had been doing bad things
      to the YAML output before:
      
          -- Check Explain YAML output
          EXPLAIN (FORMAT YAML) SELECT * from boxes LEFT JOIN apples ON apples.id = boxes.apple_id LEFT JOIN box_locations ON box_locations.id = boxes.location_id;
          QUERY PLAN
          ___________
          {
            'id' => 1,
            'short' => '- Plan:                                                  +'
          }
          GP_IGNORE:(1 row)
      
      In other words, we were not comparing the output at all, except for that
      one line that says "Plan:". The access plan for one of the queries had
      changed from a Left Join to a Right Join, and we still had the old plan
      memorized in the expected output, but the test was passing because
      atmsort hid the issue. This commit fixes the expected output for the new
      plan.
    • Remove check for NOT NULLable column from ORCA translation of INSERT values. · e89be84b
      Heikki Linnakangas committed
      When creating an ORCA plan for "INSERT ... (<col list>) VALUES (<values>)"
      statement, the ORCA translator performed NULL checks for any columns not
      listed in the column list. Nothing wrong with that per se, but we needed
      to keep the error messages in sync, or we'd get regression test failures
      caused by different messages. To simplify that, remove the check from
      ORCA translator, and rely on the execution time check.
      
      We bumped into this while working on the 9.3 merge, because 9.3 added
      DETAIL to the error message in executor:
      
      postgres=# create table notnulls (a text NOT NULL, b text NOT NULL);
      CREATE TABLE
      postgres=# insert into notnulls (a) values ('x');
      ERROR:  null value in column "b" violates not-null constraint
      postgres=# insert into notnulls (a,b) values ('x', NULL);
      ERROR:  null value in column "b" violates not-null constraint  (seg2 127.0.0.1:40002 pid=26547)
      DETAIL:  Failing row contains (x, null).
      
      Doing this now will avoid that inconsistency in the merge.

      One small difference is that EXPLAIN on an insert like the above now
      works; you only get the error when you try to execute it. Before, with
      ORCA, even EXPLAIN would throw the error.
    • Remove gpfdist --sslclean option (#5800) · d94bebb5
      Huiliang.liu committed
      The gpfdist --sslclean option was a platform-specific patch for Solaris:
      gpfdist delays cleaning the SSL buffer for a number of seconds configured
      by the option. GPDB6 no longer supports Solaris, and we don't think the
      solution benefits other platforms, so the --sslclean option is removed.

      This patch has been verified manually, and the default test cases cover
      the change.
    • Unify all error reports to ereport in cdbutil.c:getCdbComponentInfo. (#5802) · 74270ccc
      Jialun committed
      * Unify all error reports to ereport in cdbutil.c:getCdbComponentInfo.

      * Change the messages to lower case, which is the usual PostgreSQL style
      for error messages.
    • Remove a FIXME comment about the need for EvalPlanQual recheck functions for AO table scans · 12f96ee7
      BaiShaoqi committed
      Reviewed by Paul and Heikki; ideas brought by Heikki.
    • Fix pg_stat_activity showing wrong session id after session reset (#5757) · ac54faad
      Teng Zhang committed
      * Fix pg_stat_activity showing a wrong session id after a session reset

      Currently, if a session is reset because of an error such as OOM, then
      after CheckForResetSession is called gp_session_id is bumped to a new
      value, but sess_id in pg_stat_activity remains unchanged and shows the
      wrong number. This commit updates sess_id in pg_stat_activity once a
      session is reset.

      * Refactor the test to use gp_execute_on_server to trigger a session reset
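      The essence of the fix can be sketched in a few lines (simplified,
      invented names, not the actual GPDB structs): a session reset must bump
      the global session id and the backend's pg_stat_activity entry together,
      where before only the former was updated.

      ```c
      #include <assert.h>

      /* Sketch only: the backend's current session id and the value that
       * pg_stat_activity reports, which drifted apart before the fix. */
      typedef struct
      {
          int gp_session_id;   /* the backend's current session id */
          int stat_sess_id;    /* what pg_stat_activity shows */
      } BackendStatus;

      static void
      reset_session(BackendStatus *be, int *session_counter)
      {
          be->gp_session_id = ++(*session_counter);  /* the reset bumps the id */
          be->stat_sess_id = be->gp_session_id;      /* the fix: keep stats in sync */
      }
      ```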
    • Revert "Bump ORCA version to 3.0.0" · fcadfe88
      Dhanashree Kashid committed
      Revert the following commits related to ORCA version 3.0.0:

      b1f543f3.

      b0359e69.

      a341621d.
    • Bump ORCA version to 3.0.0 · b1f543f3
      Sambitesh Dash committed
      Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
    • Randomize output segment for non-master gather motion · b0359e69
      Sambitesh Dash committed
      Via https://github.com/greenplum-db/gporca/pull/400, ORCA will optimize
      DML queries by enforcing a gather on a segment instead of the master,
      whenever possible.

      Prior to this commit, ORCA always picked the first segment to gather on
      while translating the DXL GatherMotion node to a GPDB motion node.

      This commit uses GPDB's hash function to select the segment to gather
      on, in round-robin fashion starting from a random segment index. This
      ensures that concurrent DML queries issued via the same session are
      gathered on different segments, distributing the workload.
      Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
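      The selection scheme can be sketched as follows (invented names, not
      ORCA's actual translator code; how the random starting index is seeded,
      e.g. from GPDB's hash function, is left out):

      ```c
      #include <assert.h>

      /* Sketch of the idea: instead of always gathering on segment 0, start
       * at a (randomly seeded) cursor and advance round-robin, so concurrent
       * DML statements in one session land on different segments. */
      static int
      next_gather_segment(int *cursor, int num_segments)
      {
          int seg = *cursor % num_segments;
          *cursor = (*cursor + 1) % num_segments;
          return seg;
      }
      ```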
    • Introduce optimizer_enable_gather_on_segment_for_DML GUC · a341621d
      Sambitesh Dash committed
      When ON, ORCA will optimize DML queries by enforcing a non-master gather
      whenever possible. When OFF, a gather on the master will be enforced
      instead.

      The default value is ON.

      Also add new tests to ensure sane behavior when this optimization is
      turned on, and fix the existing tests.
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
      Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
    • Improve PR pipeline feedback loop · 723fbcfb
      Adam Berlin committed
      Previously we were running all tests for all operating systems. This
      commit reduces the tests run to *only* ICW, for both planner and ORCA,
      running *only* on Ubuntu. This reduces the chance that a flaky test
      causes a false-negative run of the PR pipeline. Fewer tests to run
      should also improve the speed of the pipeline, giving us faster
      feedback, while still giving us most of the confidence we need.
      Exhaustive testing will be done downstream after a PR has been merged.
      Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
6. 20 Sep 2018 (5 commits)
    • Fix ao_upgrade tests and add them to the schedule · 2b96acd2
      Jacob Champion committed
      9.1 added a new, more compact "short" format to the numeric datatype.
      This format wasn't handled by the ao_upgrade test in isolation2, so it
      failed -- but the pipeline was still green because I forgot to add the
      new test to the schedule in 54895f54.
      
      To fix the issue, add a new helper which will force any Numeric back to
      the legacy long format, and call that from convertNumericToGPDB4() in
      the ao_upgrade test. And add the test to the schedule, so we don't have
      to do this again.
      Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
    • Fix and clean up db/relation/tablespace size functions. · f8a80aeb
      Heikki Linnakangas committed
      This fixes several small bugs:
      
      - Schema-qualify the functions in all queries.
      
      - Quote database and tablespace names correctly in the dispatched
        queries.
      
      - In the variants that take OID, also dispatch the OID rather than the
        resolved name. This avoids having to deal with quoting schema and table
        names in the query, and seems like the right thing to do anyway.
      
       - Dispatch the pg_table_size() and pg_indexes_size() variants. These were
         added in PostgreSQL 9.0, but we missed modifying them in the merge the
         same way that we have modified all the other variants.
      
      Also, refactor the internal function used to dispatch the pg_*_size()
      calls to use CdbDispatchCommand directly, instead of using SPI and the
      gp_dist_random('gp_id') trick. Seems more straightforward, although I
      believe that trick should've worked, too.
      
       Add tests. We didn't have any bespoke tests for these functions, although
       we used some of the variants in other tests.
       Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
    • Remove the printing of the overflowed value when scale is exceeded. · 4b31e46f
      Heikki Linnakangas committed
      Printing the value was added in GPDB, back in 2007. The commit message of
      that change (in the historical pre-open-sourcing git repository) said:
      
          Merge forward from Release-3_0_0-branch. Update comment block.
          Tidy numeric_to_pos_int8_trunc.
      
      That wasn't very helpful...
      
      Arguably, printing the value can be useful, but if so, we should submit
      the change to upstream. I don't think it's worth the trouble, though, so
      I suggest that we just revert this to the way it is upstream. The reason
      I'm doing this now is that it caused merge conflicts in the 9.3 merge
      that we're working on right now. We could probably fix the conflict in a
      way that keeps the extra message, but it's simpler to just drop it.
    • Remove some leftover initGpmonPkt* functions. · 46b8293f
      Heikki Linnakangas committed
      Commit c1690010 removed most of these, but missed a few in GPDB-specific
      executor nodes. They are no longer needed, just like all the ones that
      were removed in commit c1690010.
    • Fix subquery with column alias `zero` producing wrong result (#5790) · b70b0086
      BaiShaoqi committed
      Reviewed and ideas brought by Heikki.