1. 29 Sep 2020, 1 commit
    • Format ORCA and GPOPT. · 219fe0c4
      Jesse Zhang authored
      The canonical config file is in src/backend/gpopt/.clang-format
      (instead of under the non-existent src/backend/gporca). I've created
      one symlink (instead of two) for GPOPT headers. Care has been taken to
      repoint the symlink at the canonical config under gpopt, instead of
      gporca as it is under HEAD.
      
      This is spiritually a cherry-pick of commit 2f7dd76c.
      (cherry picked from commit 2f7dd76c)
  2. 18 Sep 2020, 1 commit
    • Align Orca relhasindex behavior with Planner (#10788) · 8083a046
      David Kimura authored
      Function `RelationGetIndexList()` does not filter out invalid indexes.
      That responsibility is left to the caller (e.g. `get_relation_info()`).
      The issue was that Orca was not checking index validity (see the sketch
      below).
      
      This commit also introduces an optimization to Orca that is already used
      in Planner whereby we first check relhasindex before checking pg_index.
      
      (cherry picked from commit b011c351)
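      A minimal sketch of the pattern described above (not the actual Orca
      translator code; `examine_index` is a hypothetical consumer):
      
      ```
      /* Sketch: skip pg_index lookups entirely when relhasindex is false,
       * and filter out invalid indexes, as planner's get_relation_info()
       * does. RelationGetIndexList() itself does not filter them. */
      static void
      examine_valid_indexes(Relation relation)
      {
          if (!relation->rd_rel->relhasindex)
              return;
      
          List     *indexoidlist = RelationGetIndexList(relation);
          ListCell *lc;
      
          foreach(lc, indexoidlist)
          {
              Oid      indexoid = lfirst_oid(lc);
              Relation indexrel = index_open(indexoid, AccessShareLock);
      
              if (indexrel->rd_index->indisvalid)
                  examine_index(indexrel);    /* hypothetical consumer */
      
              index_close(indexrel, AccessShareLock);
          }
      
          list_free(indexoidlist);
      }
      ```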
  3. 02 Jun 2020, 1 commit
    • Bump Orca version to 3.103, support "NDV-preserving" function and op property (#10090) · f16e6148
      Hans Zeller authored
      Orca uses this property for cardinality estimation of joins.
      For example, a join predicate foo join bar on foo.a = upper(bar.b)
      will have a cardinality estimate similar to foo join bar on foo.a = bar.b.
      
      Other functions, like foo join bar on foo.a = substring(bar.b, 1, 1)
      won't be treated that way, since they are more likely to have a greater
      effect on join cardinalities.
      
      Since this is specific to ORCA, we use logic in the translator to
      determine whether a function or operator is NDV-preserving. Right now
      we consider a very limited set of operators; we may add more at a later
      time. A sketch of such a check follows.
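      A minimal sketch of what the translator-side check could look like
      (illustrative only; the actual function name and whitelist live in the
      translator):
      
      ```
      // Illustrative sketch, not the actual translator API: a function is
      // NDV-preserving if distinct inputs map to (mostly) distinct outputs.
      static bool
      IsNDVPreservingFunc(Oid funcid)
      {
          switch (funcid)
          {
              case 870:   // pg_proc OID of lower(text)
              case 871:   // pg_proc OID of upper(text)
                  return true;
              default:    // e.g. substring() is not NDV-preserving
                  return false;
          }
      }
      ```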
  4. 05 Feb 2020, 1 commit
  5. 05 Oct 2019, 1 commit
    • Bump ORCA version to 3.74.0, Introduce PallocMemoryPool for use in GPORCA (#8747) · a3266308
      Chris Hajas authored
      We introduce a new type of memory pool and memory pool manager:
      CMemoryPoolPalloc and CMemoryPoolPallocManager.
      
      The motivation for this PR is to improve memory allocation/deallocation
      performance when using GPDB allocators. Additionally, we would like to
      use the GPDB memory allocators by default (change the default for
      optimizer_use_gpdb_allocators to on), to prevent ORCA from crashing when
      we run out of memory (OOM). However, with the current way of doing
      things, doing so would add around 10% performance overhead to ORCA.
      
      CMemoryPoolPallocManager overrides the default CMemoryPoolManager in
      ORCA, and creates a CMemoryPoolPalloc memory pool instead of a
      CMemoryPoolTracker. In CMemoryPoolPalloc, we now call MemoryContextAlloc
      and pfree instead of gp_malloc and gp_free, and we don't do any memory
      accounting.
      
      So where does the performance improvement come from? Previously, we
      would (essentially) pass gp_malloc and gp_free to an underlying
      allocation structure (which has since been removed on the ORCA side),
      and we would add additional headers and overhead to maintain a list of
      all of these allocations. When tearing down the memory pool, we would
      iterate through the list of allocations and explicitly free each one.
      So we ended up paying overhead on both the ORCA side AND the GPDB side,
      and the overhead on both sides was quite expensive (see the sketch
      below).
      
      If you want to compare against the previous implementation, see the
      Allocate and Teardown functions in CMemoryPoolTracker.
      
      With this PR, we improve optimization time by ~15% on average and up to
      30-40% on some queries which are memory intensive.
      
      This PR does remove memory accounting in ORCA. This was only enabled
      when the optimizer_use_gpdb_allocators GUC was set. By setting
      `optimizer_use_gpdb_allocators`, we still capture the memory used when
      optimizing a query in ORCA, without the overhead of the memory
      accounting framework.
      
      Additionally, add a top-level ORCA memory context under which new
      contexts are created.
      
      The OptimizerMemoryContext is initialized in InitPostgres(). For each
      memory pool in ORCA, a new memory context is created in
      OptimizerMemoryContext.
      Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
      Co-authored-by: Chris Hajas <chajas@pivotal.io>
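      A minimal sketch of the idea (not the actual ORCA class): allocation is
      delegated straight to a backend memory context, so teardown is a single
      context delete instead of a walk over a tracked allocation list.
      
      ```
      // Sketch only; the real CMemoryPoolPalloc implements ORCA's memory
      // pool interface. Assumes a context created under OptimizerMemoryContext.
      class CMemoryPoolPallocSketch
      {
          MemoryContext m_cxt;
      
      public:
          explicit CMemoryPoolPallocSketch(MemoryContext cxt) : m_cxt(cxt) {}
      
          // no extra headers, no accounting list: plain MemoryContextAlloc/pfree
          void *Allocate(size_t bytes) { return MemoryContextAlloc(m_cxt, bytes); }
          void  Free(void *ptr)        { pfree(ptr); }
      
          // teardown frees everything in one shot
          void  TearDown()             { MemoryContextDelete(m_cxt); }
      };
      ```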
  6. 13 Sep 2019, 1 commit
    • Pass stats for UUID columns to ORCA · bbb529bc
      Abhijit Subramanya authored
      Previously we would not pass statistics for UUID columns to ORCA. This
      caused cardinality misestimation, which in turn could cause ORCA to
      pick a bad plan. This patch fixes the issue by passing in the
      statistics for UUID columns.
  7. 24 Jul 2019, 1 commit
    • Bump ORCA version to 3.59.0 (#8134) · d973f024
      Ashuka Xue authored
      This commit corresponds to the ORCA commit "Implement Full Merge Join".
      
      In GPDB 5, merge join is disabled, but the following changes were made
      to keep GPDB 5 compiling against ORCA:
      
      1. Translator changes for Merge Join.
      2. Add IsOpMergeJoinable() and GetMergeJoinOpFamilies() wrappers.
  8. 01 Mar 2019, 1 commit
  9. 27 Sep 2018, 1 commit
  10. 16 Aug 2018, 1 commit
  11. 26 Jul 2018, 1 commit
    • ORCA now mimics planner when it comes to empty stats · 2fad570f
      Omer Arap authored
      When no stats were available for a table, ORCA treated it as an empty
      table during planning. The planner, on the other hand, uses the GUC
      `gp_enable_relsize_collection` to obtain an estimated size of the table
      (but no other statistics). This commit makes ORCA behave the same way
      as the planner when the GUC is set.
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
  12. 02 Feb 2018, 1 commit
    • Fix get_attstatsslot()/free_attstatsslot() when statistics are broken. · 5bc15b17
      Dhanashree Kashid authored
      In scenarios where pg_statistic contains a wrong statistic entry for an
      attribute, or when the statistics on a particular attribute are broken
      (e.g. the type of the elements stored in stavalues<1/2/3> differs from
      the actual attribute type, or there are holes in the attribute numbers
      due to adding/dropping columns), the following two APIs fail because
      they relied on the attribute type sent by the caller:
      
      - get_attstatsslot(): extracts the contents (numbers/frequency array and
      values array) of the requested statistic slot (MCV, HISTOGRAM, etc.). If
      the attribute is pass-by-reference or of a toastable type (varlena
      types), it returns a copy allocated with palloc().
      - free_attstatsslot(): frees any data palloc'd by get_attstatsslot().
      
      This problem was fixed in upstream 8.3
      (8c21b4e9) for get_attstatsslot(),
      wherein the actual element type of the array is used for deconstructing
      it rather than the caller-passed OID.
      free_attstatsslot() still depends on the type OID sent by the caller.
      
      However, the issue still existed for free_attstatsslot(), which crashed
      while freeing the array. The crash happened because the caller-sent type
      OID was TEXT, a varlena type, so free_attstatsslot() attempted to free
      each datum; due to the broken slot, however, the datums extracted from
      the values array were of a fixed-length type such as int. We treated the
      int value as a memory address and crashed while freeing it.
      
      This commit brings in the following fix from upstream 10, which
      redesigns get_attstatsslot()/free_attstatsslot() so that they are robust
      to scenarios like these.
      
      commit 9aab83fc
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Sat May 13 15:14:39 2017 -0400
      
          Redesign get_attstatsslot()/free_attstatsslot() for more safety and speed.
      
          The mess cleaned up in commit da075960 is clear evidence that it's a
          bug hazard to expect the caller of get_attstatsslot()/free_attstatsslot()
          to provide the correct type OID for the array elements in the slot.
          Moreover, we weren't even getting any performance benefit from that,
          since get_attstatsslot() was extracting the real type OID from the array
          anyway.  So we ought to get rid of that requirement; indeed, it would
          make more sense for get_attstatsslot() to pass back the type OID it found,
          in case the caller isn't sure what to expect, which is likely in binary-
          compatible-operator cases.
      
          Another problem with the current implementation is that if the stats array
          element type is pass-by-reference, we incur a palloc/memcpy/pfree cycle
          for each element.  That seemed acceptable when the code was written because
          we were targeting O(10) array sizes --- but these days, stats arrays are
          almost always bigger than that, sometimes much bigger.  We can save a
          significant number of cycles by doing one palloc/memcpy/pfree of the whole
          array.  Indeed, in the now-probably-common case where the array is toasted,
          that happens anyway so this method is basically free.  (Note: although the
          catcache code will inline any out-of-line toasted values, it doesn't
          decompress them.  At the other end of the size range, it doesn't expand
          short-header datums either.  In either case, DatumGetArrayTypeP would have
          to make a copy.  We do end up using an extra array copy step if the element
          type is pass-by-value and the array length is neither small enough for a
          short header nor large enough to have suffered compression.  But that
          seems like a very acceptable price for winning in pass-by-ref cases.)
      
          Hence, redesign to take these insights into account.  While at it,
          convert to an API in which we fill a struct rather than passing a bunch
          of pointers to individual output arguments.  That will make it less
          painful if we ever want further expansion of what get_attstatsslot can
          pass back.
      
          It's certainly arguable that this is new development and not something to
          push post-feature-freeze.  However, I view it as primarily bug-proofing
          and therefore something that's better to have sooner not later.  Since
          we aren't quite at beta phase yet, let's put it in.
      
          Discussion: https://postgr.es/m/16364.1494520862@sss.pgh.pa.us
      
      Most of the changes are the same as the upstream commit, with the
      following additions:
      - Relcache translator changes in ORCA.
      - Added a test that simulates the crash due to broken stats.
      - get_attstatsslot() contains an extra check for an empty slot array,
      which exists in master but not in upstream.
      Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
      (cherry picked from commit ae06d7b0)
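      For reference, the redesigned upstream API fills a struct instead of
      several out-parameters; a minimal usage sketch (the consumer
      `use_mcv_value` is illustrative):
      
      ```
      /* Redesigned API from upstream commit 9aab83fc: fill an AttStatsSlot;
       * the element type is discovered from the array itself. */
      static void
      examine_mcv(HeapTuple statstuple)
      {
          AttStatsSlot sslot;
      
          if (get_attstatsslot(&sslot, statstuple, STATISTIC_KIND_MCV,
                               InvalidOid,
                               ATTSTATSSLOT_VALUES | ATTSTATSSLOT_NUMBERS))
          {
              /* sslot.valuetype reports the element type actually found */
              for (int i = 0; i < sslot.nvalues; i++)
                  use_mcv_value(sslot.values[i], sslot.numbers[i]);
      
              free_attstatsslot(&sslot);  /* frees only what was palloc'd */
          }
      }
      ```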
  13. 21 Dec 2017, 1 commit
    • Reimplement ORCA interrupts using a callback function · fdbe5bbb
      Shreedhar Hardikar authored
      As pointed out by Heikki, maintaining another variable to match one in
      the database system will be error-prone and cumbersome, especially while
      merging with upstream. This commit initializes ORCA with a pointer to a
      GPDB function that returns true when QueryCancelPending or
      ProcDiePending is set. This way we no longer have to micro-manage
      setting and re-setting some internal ORCA variable, or touch signal
      handlers.
      
      This commit also reverts commit 0dfd0ebc "Support optimization interrupts
      in ORCA" and reuses tests already pushed by 916f460f and 0dfd0ebc.
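      A minimal sketch of such a callback (the ORCA-side registration hook is
      omitted here):
      
      ```
      /* Sketch: the GPDB-side callback handed to ORCA at initialization.
       * ORCA polls this instead of maintaining its own interrupt flag. */
      static bool
      GPDBAbortRequested(void)
      {
          return QueryCancelPending || ProcDiePending;
      }
      ```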
  14. 26 Sep 2017, 1 commit
    • Enable ORCA to be tracked by Mem Accounting (#3378) · 010f7025
      sambitesh authored
      Before this commit, all memory allocations made by ORCA/GPOS were a
      black box to GPDB. However, the groundwork had been laid to allow
      GPDB's Memory Accounting Framework to track memory consumption by ORCA.
      This commit introduces two new functions, Ext_OptimizerAlloc and
      Ext_OptimizerFree, which pass their parameters through to gp_malloc and
      gp_free and do some bookkeeping against the Optimizer memory account.
      This introduces very little overhead to the GPOS memory management
      framework (a sketch follows).
      Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
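      A minimal sketch of the pass-through shape (simplified: the exact
      signatures and the accounting calls into the framework are assumptions
      here, not the actual code):
      
      ```
      // Sketch only: pass-throughs that also update the Optimizer memory
      // account; the accounting calls are elided.
      void *
      Ext_OptimizerAlloc(size_t size)
      {
          // bookkeeping against the Optimizer memory account goes here
          return gp_malloc(size);
      }
      
      void
      Ext_OptimizerFree(void *ptr)
      {
          // matching decrement of the Optimizer memory account goes here
          gp_free(ptr);
      }
      ```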
  15. 10 Aug 2017, 1 commit
    • Fix Relcache Translator to send CoercePath info (#2842) · cc799db4
      khannaekta authored
      Currently ORCA crashes while executing the following query:
      ```
      CREATE TABLE FOO(a integer NOT NULL, b double precision[]);
      SELECT b FROM foo
      UNION ALL
      SELECT ARRAY[90, 90] as Cont_features;
      ```
      
      In the query, we are appending an integer array (ARRAY[90, 90]) to a
      double precision array (foo.b), and hence we need to apply a cast on
      ARRAY[90, 90] to generate ARRAY[90, 90]::double precision[].
      In GPDB 5 there is no direct function available that can cast an array
      of one type to an array of any other type.
      So in the relcache-to-DXL translator we look at the array elements, get
      their type, and try to find a cast function for them. For this query,
      the source type is 23 (integer) and the destination type is 701 (double
      precision), and we look for a conversion function for 23 -> 701. Since
      one is available, we send that function to ORCA as follows:
      ```
      <dxl:MDCast Mdid="3.1007.1.0;1022.1.0"
      Name="float8" BinaryCoercible="false" SourceTypeId="0.1007.1.0"
      DestinationTypeId="0.1022.1.0" CastFuncId="0.316.1.0"/>
      ```
      Here we are misinforming ORCA by claiming that the function with id 316
      can convert type 1007 (integer array) to type 1022 (double precision
      array). However, function id 316 is a simple int4-to-float8 conversion
      function and CANNOT convert an array of int4 to an array of double
      precision. ORCA generates a plan using this function, but the executor
      crashes while executing it because the function cannot handle arrays.
      
      This commit fixes the issue by passing ArrayCoercePath info to ORCA.
      In the relcache translator, the appropriate cast function is retrieved
      in `gpdb::FCastFunc()`, which relies on `find_coercion_pathway()` to
      provide the cast function oid given the source and destination types.
      
      `find_coercion_pathway()` does not just determine the cast function to
      be used; it also determines the coercion path. Previously we ignored
      the coercion path and generated a simple Cast metadata object.
      
      With this commit, we now pass the path type to the relcache translator
      and generate an ArrayCoerceCast metadata object depending on the
      coercion path.
      
      In ORCA, when the DXL is translated to an expression, we check the path
      type along with the cast function and generate `CScalarArrayCoerceExpr`
      if the path type is array coerce; otherwise we generate a simple
      `CScalarCast`.
      
      Please check the corresponding ORCA PR.
      
      Bump ORCA version to 2.40
      Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
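      A minimal sketch of the translator-side lookup described above
      (function and variable names around the lookup are illustrative):
      
      ```
      /* Sketch: keep both outputs of find_coercion_pathway(), the cast
       * function AND the coercion path, instead of dropping the path. */
      static void
      translate_cast(Oid srctype, Oid desttype)
      {
          Oid              castfunc = InvalidOid;
          CoercionPathType pathtype;
      
          pathtype = find_coercion_pathway(desttype, srctype,
                                           COERCION_IMPLICIT, &castfunc);
      
          if (pathtype == COERCION_PATH_ARRAYCOERCE)
          {
              /* element-wise coercion: emit ArrayCoerceCast metadata, which
               * ORCA translates into CScalarArrayCoerceExpr */
          }
          else if (pathtype == COERCION_PATH_FUNC)
          {
              /* plain cast function: emit Cast metadata (CScalarCast) */
          }
      }
      ```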
  16. 09 Aug 2017, 1 commit
    • [#149699023] Handle interrupts in ORCA to avoid crashes · 17322684
      Bhuvnesh Chaudhary authored
      In ORCA, we do not process interrupts during the planning stage.
      However, if there are elog/ereport statements (which in turn call
      errfinish) printing additional messages, we prematurely exit the
      planning stage without cleaning up the memory pools, leaving the memory
      pool state inconsistent. This results in crashes for subsequent
      queries.
      
      This commit fixes the issue by handling interrupts while
      printing messages using elog/ereport in ORCA.
      Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
  17. 25 Apr 2017, 1 commit
    • Transform small Array constants to ArrayExprs. · 9a817d45
      Heikki Linnakangas authored
      ORCA can do some optimizations, partition pruning at least, if it can
      "see" into the elements of an array in a ScalarArrayOpExpr. For example,
      if you have a qual like "column IN (1, 2, 3)", and the table is
      partitioned on column, it can eliminate partitions that don't hold
      those values. The IN-clause is converted into a ScalarArrayOpExpr, so
      that is really equivalent to "column = ANY <array>".
      
      However, ORCA doesn't know how to extract elements from an array-typed
      Const, so it can only do that if the array in the ScalarArrayOpExpr is
      an ArrayExpr. Normally, eval_const_expressions() simplifies any
      ArrayExpr into a Const if all the elements are constants, but we had
      disabled that when ORCA was used, to keep the ArrayExprs visible to it.
      
      There are a couple of reasons why that was not a very good solution. First,
      while we refrain from converting an ArrayExpr to an array Const, it doesn't
      help if the argument was an array Const to begin with. The "x IN (1,2,3)"
      construct is converted to an ArrayExpr by the parser, but we would miss the
      opportunity if it's written as "x = ANY ('{1,2,3}'::int[])" instead.
      Secondly, by not simplifying the ArrayExpr, we miss the opportunity to
      simplify the expression further. For example, if you have a qual like
      "1 IN (1,2)", we can evaluate that completely at plan time to 'true', but
      we would not do that with ORCA because the ArrayExpr was not simplified.
      
      To be able to also optimize those cases, and to slightly reduce our diff
      vs upstream in clauses.c, always simplify ArrayExprs to Consts, when
      possible. To compensate, so that ORCA still sees ArrayExprs rather than
      array Consts (in those cases where it matters), when a ScalarArrayOpExpr
      is handed over to ORCA, we check if the argument array is a Const, and
      convert it (back) to an ArrayExpr if it is (sketched below).
      Signed-off-by: Jemish Patel <jpatel@pivotal.io>
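      A minimal sketch of the Const-to-ArrayExpr direction (`con` is the
      array-typed Const; assembling the element Consts into the ArrayExpr
      node is abbreviated):
      
      ```
      /* Sketch: expand an array-typed Const back into an ArrayExpr so ORCA
       * can see the individual elements. */
      static void
      const_to_arrayexpr(Const *con)
      {
          ArrayType *arr = DatumGetArrayTypeP(con->constvalue);
          Oid        elemtype = ARR_ELEMTYPE(arr);
          int16      elmlen;
          bool       elmbyval;
          char       elmalign;
          Datum     *elems;
          bool      *nulls;
          int        nelems;
      
          get_typlenbyvalalign(elemtype, &elmlen, &elmbyval, &elmalign);
          deconstruct_array(arr, elemtype, elmlen, elmbyval, elmalign,
                            &elems, &nulls, &nelems);
      
          /* wrap each element in a Const and collect them into
           * ArrayExpr->elements; set array_typeid/element_typeid to match */
      }
      ```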
  18. 19 Apr 2017, 1 commit
  19. 14 Apr 2017, 1 commit
  20. 01 Apr 2017, 1 commit
    • Rule based partition selection for list (sub)partitions (#2076) · 5cecfcd1
      foyzur authored
      GPDB supports range and list partitions. Range partitions are represented as a set of rules. Each rule defines the boundaries of a part. E.g., a rule might say that a part contains all values between (0, 5], where the left bound 0 is exclusive and the right bound 5 is inclusive. List partitions are defined by a list of values that the part will contain.
      
      ORCA uses the above rule definition to generate expressions that determine which partitions need to be scanned. These expressions are of the following types:
      
      1. Equality predicate as in PartitionSelectorState->levelEqExpressions: If we have a simple equality on partitioning key (e.g., part_key = 1).
      
      2. General predicate as in PartitionSelectorState->levelExpressions: If we need more complex composition, including non-equality such as part_key > 1.
      
      Note: We also have a residual predicate, which the optimizer currently doesn't use. We are planning to remove this dead code soon.
      
      Prior to this PR, ORCA treated both range and list partitions as range partitions. This meant that each list part was converted to a set of list values, and each of these values became a single-point range partition.
      
      E.g., consider the DDL:
      
      ```sql
      CREATE TABLE DATE_PARTS (id int, year int, month int, day int, region text)
      DISTRIBUTED BY (id)
      PARTITION BY RANGE (year)
          SUBPARTITION BY LIST (month)
             SUBPARTITION TEMPLATE (
              SUBPARTITION Q1 VALUES (1, 2, 3), 
              SUBPARTITION Q2 VALUES (4 ,5 ,6),
              SUBPARTITION Q3 VALUES (7, 8, 9),
              SUBPARTITION Q4 VALUES (10, 11, 12),
              DEFAULT SUBPARTITION other_months )
      ( START (2002) END (2012) EVERY (1), 
        DEFAULT PARTITION outlying_years );
      ```
      
      Here we subpartition by month as a list, grouped into quarters, so each list part contains three months. Now consider a query on this table:
      
      ```sql
      select * from DATE_PARTS where month between 1 and 3;
      ```
      
      Prior to this PR, the ORCA-generated plan considered each value of Q1 as a separate range part with just one point range. I.e., we would have 3 virtual parts to evaluate for Q1 alone: [1], [2], [3]. This approach is inefficient. The problem is further exacerbated when we have multi-level partitioning. Consider the list part of the above example: we have only 4 rules for 4 different quarters, but we get 12 different virtual rules (aka constraints). For each such constraint, we then evaluate the entire subtree of partitions.
      
      After this PR, we no longer decompose rules into constraints for list parts and then derive single-point virtual range partitions based on those constraints. Rather, the new ORCA changes use ScalarArrayOp to express selectivity on a list of values. So, the expression for the above SQL will look like 1 <= ANY {month_part} AND 3 >= ANY {month_part}, where month_part is substituted at runtime with a different list of values for each quarterly partition. We end up evaluating that expression 4 times with the following lists of values:
      
      Q1: 1 <= ANY {1,2,3} AND 3 >= ANY {1,2,3}
      Q2: 1 <= ANY {4,5,6} AND 3 >= ANY {4,5,6}
      ...
      
      Compare this to the previous approach, where we end up evaluating 12 different expressions, each against a single point value:
      
      First constraint of Q1: 1 <= 1 AND 3 >= 1
      Second constraint of Q1: 1 <= 2 AND 3 >= 2
      Third constraint of Q1: 1 <= 3 AND 3 >= 3
      First constraint of Q2: 1 <= 4 AND 3 >= 4
      ...
      
      The ScalarArrayOp depends on a new type of expression PartListRuleExpr that can convert a list rule to an array of values. ORCA specific changes can be found here: https://github.com/greenplum-db/gporca/pull/149
  21. 07 Mar 2017, 1 commit
    • Add function for throwing ereport-like errors from gpopt code. · f425e97c
      Heikki Linnakangas authored
      This allows us to have the exact same error message and hint for errors
      as what the traditional planner produces. That makes testing easier, as
      you don't need a different expected output file for ORCA and non-ORCA.
      And it allows for more structured errors anyway.
      
      Use the new function for the case of trying to read from a WRITABLE
      external table. There was no test for that in the main test suite
      previously. There was one in the gpfdist suite, but that's not really the
      right place, as that error is caught the same way regardless of the
      protocol. While we're at it, re-word the error message and change the error
      code to follow the Postgres error message style guide.
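      A minimal sketch of what such an entry point could look like (the name
      and signature here are assumptions, not the actual function added by
      this commit):
      
      ```
      /* Hypothetical wrapper: raise a planner-style error from gpopt code. */
      static void
      GpoptEreport(int xerrcode, const char *message, const char *hint)
      {
          ereport(ERROR,
                  (errcode(xerrcode),
                   errmsg("%s", message),
                   hint ? errhint("%s", hint) : 0));
      }
      ```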
  22. 08 Feb 2017, 1 commit
  23. 04 Feb 2017, 1 commit
    • [#138767899] Prune system cols for appendonly partition tables · 8e001fac
      Omer Arap authored
      Previously, the gporca translator pruned the non-visible system columns
      from the table descriptor only for non-partitioned `appendonly` tables,
      or when the partitioned table was marked `appendonly` at the root
      level.
      
      If one of the leaf partitions is marked `appendonly` but the root is
      not, the system columns still appeared in the table descriptor.
      
      This commit fixes the issue by checking whether the root table has
      `appendonly` partitions and pruning the system columns if it does.
  24. 24 Jan 2017, 1 commit
    • [#134494357] Added ANYENUM, ANYNONARRAY · c3ad85eb
      Dhanashree Kashid authored
      After the 8.3 merge, gpdb has new polymorphic types: ANYENUM and
      ANYNONARRAY.
      
      This fix adds support for ANYENUM and ANYNONARRAY in the Translator.
      
      As in PostgreSQL, when a function has polymorphic arguments and results,
      they must resolve to the same actual type in the function call.
      For example, a function declared as `f(ANYARRAY) returns ANYENUM`
      will only accept arrays of enum types.
      
      We already have this resolution logic implemented in
      `resolve_polymorphic_argtypes()`.
      
      Refactor the code in `PdrgpmdidResolvePolymorphicTypes()` to use
      `resolve_polymorphic_argtypes()` to deduce the correct data types for
      the function arguments and return type, based on the function call.
      Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
      Signed-off-by: Omer Arap <oarap@pivotal.io>
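      A minimal sketch of the resolution call (argument setup abbreviated;
      `call_expr` stands for the expression tree of the call site):
      
      ```
      /* Sketch: resolve ANYENUM/ANYNONARRAY and friends against the actual
       * types at the call site, reusing the backend's existing logic. */
      static void
      resolve_actual_types(Form_pg_proc procstruct, int numargs, Node *call_expr)
      {
          Oid argtypes[FUNC_MAX_ARGS];
      
          memcpy(argtypes, procstruct->proargtypes.values,
                 numargs * sizeof(Oid));
      
          if (!resolve_polymorphic_argtypes(numargs, argtypes,
                                            NULL /* argmodes */, call_expr))
              elog(ERROR, "could not determine actual argument types");
      }
      ```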
  25. 20 Jan 2017, 1 commit
  26. 18 Jan 2017, 1 commit
    • [#134494265] Update Translator files to refer 'OpFamily' · a8c6930d
      Dhanashree Kashid authored
      With PostgreSQL 8.3, there's a new concept called "operator families".
      An operator class is now part of an operator family, which can contain
      cross-datatype operators that are "compatible" with each other.
      
      ORCA doesn't know anything about that. This commit updates the
      Translator files to refer to OpFamily instead of 'OpClasses'.
      
      ORCA still doesn't take advantage of this, but at least we are using
      operator families in place of operator classes to make indexes work.
      Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
  27. 10 Nov 2016, 1 commit
  28. 02 Nov 2016, 1 commit
  29. 01 Nov 2016, 1 commit
    • Fix ORCA error message and make it the same as Planner's (Closes #1247) · 4eb5db7a
      Karthikeyan Jambu Rajaramn authored
      - In ORCA, due to the way exceptions were handled previously, we printed
      a warning first and then an error referring to that message. This commit
      enhances the exception handling so we print just a single error message.
      - Also, remove the 'PQO unable to generate a plan' and 'Aborting PQO
      plan generation' messages and make the error message as close to the
      planner's as possible.
      - Update the error message with the file name and line number from which
      the exception was raised.
  30. 20 Oct 2016, 1 commit
    • Explicitly mark abort() as an allowed API call for gpopt · ab281fc5
      Daniel Gustafsson authored
      libgpos has a set of banned API calls which need to be allowed with
      the ALLOW_xxx macro in order for gpopt to compile (and thus run).
      The changes to ereport() brought a need for allowing abort(), since
      ereport() now invokes abort() when building with --enable-cassert.
      
      This is a temporary fix awaiting the removal of the function-call
      banning entirely. Pushed even though the CI pipeline failed to provide
      a clean run (for seemingly unrelated reasons), since the absence of
      this was blocking other efforts.
  31. 30 Jun 2016, 1 commit
  32. 23 Jun 2016, 1 commit
  33. 22 Mar 2016, 1 commit
    • Remove unnecessary 'need_free' argument from defGetString(). · e520d50c
      Heikki Linnakangas authored
      All of the callers are in places where leaking a few bytes of memory to
      the current memory context will do no harm: either parsing, processing
      a DDL command, or planning. So let's simplify the callers by removing
      the argument. That makes the code match upstream again, which makes
      merging easier.
      
      These changes were originally made to reduce the memory consumption when
      doing parse analysis on a heavily partitioned table, but the previous
      commit provided a more whole-sale solution for that, so we don't need to
      nickel-and-dime every allocation anymore.
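      A before/after sketch of the caller-side change (the `need_free` flavor
      being the old GPDB-local signature removed here):
      
      ```
      static void
      use_defelem(DefElem *def)
      {
          /* before: GPDB-local signature with an out-parameter
           *   bool  need_free;
           *   char *val = defGetString(def, &need_free);
           *   if (need_free)
           *       pfree(val);
           *
           * after: upstream signature; the bytes live briefly in the
           * current memory context and are freed wholesale with it */
          char *val = defGetString(def);
      
          (void) val;
      }
      ```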
  34. 12 Feb 2016, 2 commits
  35. 11 Feb 2016, 1 commit
    • Fix a set of compiler warnings in gpopt · 074446fc
      Daniel Gustafsson authored
      Without altering functionality, fix a set of compiler warnings in
      gpopt: properly return in a non-void function, remove a non-function
      invocation of a variable, and use the right format specifier for ULLONG
      when printing.
  36. 26 Jan 2016, 1 commit
    • DEFAULT parameters of UDF ported from PostgreSQL 8.4 · 5b2af3cf
      Kuien Liu authored
      Functions can be declared with parameters that have default values or
      expressions. The default expression is used as the parameter value if
      the parameter is not explicitly specified in a function call.
      All parameters after a parameter with a default value have to be
      parameters with default values as well.
      
      This allows a user to invoke a UDF without setting all the parameters.
      Two examples to demo its usage:
      
          CREATE FUNCTION dfunc1(text DEFAULT 'Hello', text DEFAULT 'World')
              RETURNS text AS $$
              SELECT $1 || ', ' || $2;
              $$ LANGUAGE SQL;
          SELECT dfunc1();  -- 'Hello, World'
          SELECT dfunc1('Hi');  -- 'Hi, World'
          SELECT dfunc1('Hi', 'Beijing');  -- 'Hi, Beijing'
      
          CREATE FUNCTION dfunc2(id int4, t timestamp DEFAULT now())
              RETURNS text AS $$
              SELECT 'Time for id:' || $1 || ' is ' || $2;
              $$ LANGUAGE SQL;
          SELECT dfunc2(24);  -- 'Time for id:24 is 2016-01-07 14:38'
      
      NOTE: The default change set is ported from PostgreSQL 8.4,
          original commits:
          '517ae403'
          '455dffbb'
  37. 23 Dec 2015, 1 commit
  38. 22 Dec 2015, 1 commit
    • VARIADIC parameters of UDF ported from PostgreSQL. · 4665a8d5
      Yu Yang authored
      Users can use VARIADIC to specify the parameter list when defining a
      UDF if they want variadic parameters. It is easier to write a single
      variadic function than several same-named functions with different
      parameter lists. An example of using variadic:
      create function concat(text, variadic anyarray) returns text as $$
      select array_to_string($2, $1);
      $$ language sql immutable strict;
      
      select concat('%', 1, 2, 3, 4, 5);
      
      NOTE: The variadic change set is ported from upstream of PostgreSQL:
      commit 517ae403
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Thu Dec 18 18:20:35 2008 +0000
      
      Code review for function default parameters patch.  Fix numerous problems as
      per recent discussions.  In passing this also fixes a couple of bugs in
      the previous variadic-parameters patch.
      
      commit 6563e9e2
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Wed Jul 16 16:55:24 2008 +0000
      
      Add a "provariadic" column to pg_proc to eliminate the remarkably expensive
      need to deconstruct proargmodes for each pg_proc entry inspected by
      FuncnameGetCandidates().  Fixes function lookup performance regression
      caused by yesterday's variadic-functions patch.
      
      In passing, make pg_proc.probin be NULL, rather than a dummy value '-',
      in cases where it is not actually used for the particular type of function.
      This should buy back some of the space cost of the extra column.
      
      commit d89737d3
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Wed Jul 16 01:30:23 2008 +0000
      
      Support "variadic" functions, which can accept a variable number of arguments
      so long as all the trailing arguments are of the same (non-array) type.
      The function receives them as a single array argument (which is why they
      have to all be the same type).
      
      It might be useful to extend this facility to aggregates, but this patch
      doesn't do that.
      
      This patch imposes a noticeable slowdown on function lookup --- a follow-on
      patch will fix that by adding a redundant column to pg_proc.
      
      Conflicts:
      	src/backend/gpopt/gpdbwrappers.cpp
  39. 17 Dec 2015, 1 commit
    • Track changes to catalogs that contain data cached in the metadata cache. · 7167ac78
      Heikki Linnakangas authored
      ORCA uses its own metadata cache to store information about relations,
      operators etc. Currently, we always reset the cache when planning a
      query (which is slow) unless the optimizer_release_mdcache GUC is
      turned off.
      
      To make it safe to turn optimizer_release_mdcache off, use the catalog
      cache invalidation mechanism to still reset the cache when there are
      changes to the catalogs that affect the metadata cache.
      
      The ORCA-facing interface of this is the same as in the previous
      attempt: a function that returns true/false indicating whether there
      have been any catalog changes whatsoever since the last call (see the
      sketch below).
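      A minimal sketch of the mechanism, assuming syscache invalidation
      callbacks are used (the callback signature follows recent PostgreSQL
      and differs in older releases; names here are illustrative):
      
      ```
      /* Sketch: set a flag on any invalidation of catalogs that feed the
       * metadata cache; ORCA asks for (and clears) it once per query. */
      static bool mdcache_needs_reset = false;
      
      static void
      mdcache_inval_callback(Datum arg, int cacheid, uint32 hashvalue)
      {
          mdcache_needs_reset = true;
      }
      
      static void
      mdcache_register_callbacks(void)
      {
          /* one registration per catalog cache that feeds the MD cache */
          CacheRegisterSyscacheCallback(RELOID, mdcache_inval_callback,
                                        (Datum) 0);
      }
      
      /* the ORCA-facing test-and-clear function */
      bool
      mdcache_invalidation_pending(void)
      {
          bool pending = mdcache_needs_reset;
      
          mdcache_needs_reset = false;
          return pending;
      }
      ```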