- 29 September 2020 (1 commit)
  By Jesse Zhang
The canonical config file is in src/backend/gpopt/.clang-format (instead of under the non-existent src/backend/gporca). I've created one symlink (instead of two) for the GPOPT headers. Care has been taken to point the symlink at the canonical config under gpopt, instead of gporca as it is under HEAD. This is spiritually a cherry-pick of commit 2f7dd76c. (cherry picked from commit 2f7dd76c)

- 20 September 2019 (1 commit)
  By Shreedhar Hardikar
- Fix "missing prototype" warnings
- Fix "generalized initializer lists are a C++ extension" warning

    funcs.cpp:43:1: warning: no previous prototype for function 'DisableXform' [-Wmissing-prototypes]
    funcs.cpp:76:1: warning: no previous prototype for function 'EnableXform' [-Wmissing-prototypes]
    funcs.cpp:109:1: warning: no previous prototype for function 'LibraryVersion' [-Wmissing-prototypes]
    funcs.cpp:123:1: warning: no previous prototype for function 'OptVersion' [-Wmissing-prototypes]
    4 warnings generated.
    CTranslatorDXLToScalar.cpp:730:9: warning: generalized initializer lists are a C++11 extension [-Wc++11-extensions]
            return { .oid_type = inner_type_oid, .type_modifier = type_modifier};
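Both classes of warning have conventional fixes: give a file-local function internal linkage (or declare a prototype before defining it), and use plain aggregate initialization instead of designated initializers where the code must compile as pre-C++11. A minimal sketch with hypothetical names (not the actual funcs.cpp code):

```cpp
// -Wmissing-prototypes: declare the function before defining it...
int DisableXformDemo(int xform_id);
int DisableXformDemo(int xform_id) { return xform_id; }

// ...or give it internal linkage so no separate prototype is expected.
static int EnableXformDemo(int xform_id) { return xform_id; }

struct TypeDesc {
    unsigned oid_type;
    int type_modifier;
};

// Avoids "generalized initializer lists are a C++11 extension":
// ordinary aggregate initialization instead of
// "return { .oid_type = ..., .type_modifier = ... };"
TypeDesc MakeTypeDesc(unsigned oid, int typmod) {
    TypeDesc td = { oid, typmod };
    return td;
}
```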

- 17 September 2019 (1 commit)
  By Heikki Linnakangas
GPORCA PR https://github.com/greenplum-db/gporca/pull/475 will remove CDXLDatum::IsPassByValue() method and related code, because it's not really needed and bloats DXL files unnecessarily. This commit makes the corresponding changes to the GPORCA translator code.

- 1 June 2019 (1 commit)
  By Chris Hajas
The IMemoryPool interface was removed in ORCA to eliminate an unnecessary abstraction layer and avoid costly casting. Corresponding ORCA commit: e64a2b42. Bumps the ORCA version to 3.46.0.
Authored-by: Chris Hajas <chajas@pivotal.io>

- 30 May 2019 (1 commit)
  By Shreedhar Hardikar
Revert "Revert "Tests for ORCA commit "Refactor RewindabilitySpec to improve correlated subquery handling""" This reverts commit c90e2f0c.

- 23 May 2019 (1 commit)
  By Shreedhar Hardikar
This reverts commit 190b6856. It caused an "ERROR: Illegal rescan of motion node" error on 5X. Brings ORCA back to version 3.43.1.

- 21 May 2019 (1 commit)
  By Shreedhar Hardikar
Also bump ORCA version to v3.44.0.

- 16 August 2018 (1 commit)
  By Bhuvnesh Chaudhary
As part of moving away from Hungarian notation in the GPORCA codebase, the integration points between GPORCA and GPDB in the translator have been renamed to the new convention used in GPORCA. The libraries currently updated to the new notation in GPORCA are Naucrates and GPOS. The new naming convention is a custom version of common C++ naming conventions; the style guide for this convention can be found in the GPORCA repository. Also bump the ORCA version to 2.69.0.
Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
Co-authored-by: Abhijit Subramanya <asubramanya@pivotal.io>
Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
Co-authored-by: Dhanashree Kashid <dkashid@pivotal.io>
Co-authored-by: Omer Arap <oarap@pivotal.io>

- 15 February 2018 (1 commit)
  By Jesse Zhang
ORCA has historically ignored type modifiers from databases that support them, notably Postgres and Greenplum. This has led to surprises in a few cases:

1. The output description over the wire (for the Postgres protocol) loses the type modifier information, which often means length. This surprises code that expects a non-default type modifier, e.g. a JDBC driver.
2. The executor in some cases -- notably DML -- expects a precise type modifier. Because ORCA always erased the type modifiers and presented a default, the executor was forced to find that information elsewhere.

After this commit, ORCA is aware of type modifiers in table columns, scalar identifiers, constants, and length-coercion casts.
Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
(cherry picked from commit 2d907526)

- 25 January 2018 (1 commit)
  By Shreedhar Hardikar
This information can be easily derived from the CDXLColRef member of CDXLScalarIdent. This now mirrors what is done in the ORCA types CScalarIdent and CColRef.
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>

- 9 January 2018 (1 commit)
  By Sambitesh Dash
Instead of assuming that casts are always binary-coercible (and hence that we could get away with just dropping them), translate casts in ORCA plans into either a RelabelType or a FuncExpr.
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
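The choice the translator makes reduces to one question: does the cast have a conversion function? A hedged sketch of that dispatch, with toy types standing in for the real planner nodes (the actual translator logic is more involved):

```cpp
// Toy stand-ins for the two node kinds a cast can translate to.
enum CastNodeKind { RELABEL_TYPE, FUNC_EXPR };

// In GPDB, a binary-coercible cast has no cast function
// (its function OID is InvalidOid, i.e. 0).
CastNodeKind TranslateCastSketch(unsigned cast_func_oid) {
    // No conversion function: the bits are reinterpreted as-is,
    // so a RelabelType marker is enough.
    if (cast_func_oid == 0)
        return RELABEL_TYPE;
    // Otherwise the conversion function must actually be called.
    return FUNC_EXPR;
}
```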

- 12 September 2017 (2 commits)
  By Bhuvnesh Chaudhary
With commit 0daa3b5e, we consumed the winagg and winstar fields of WindowRef in ORCA. However, the winagg and winstar fields are not yet available in the WindowRef node here, so they are marked as false.
Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>

  By Bhuvnesh Chaudhary
With commit 387c485d, the winstar and winagg fields were added to the WindowRef node, so this commit adds handling for them in the ORCA translator.

- 26 August 2017 (1 commit)
  By Heikki Linnakangas
The winlevelsup field isn't used. The reason it's not needed can be summed up by this comment in PostgreSQL 8.4's transformWindowFunc function:

> Unlike aggregates, only the most closely nested pstate level need be
> considered --- there are no "outer window functions" per SQL spec.

A second line of reasoning is that the winlevelsup field was always initialized to 0, and only incremented in the IncrementVarSublevelsUp function. But that function is only used during planning, so winlevelsup was always 0 in the parse and parse-analysis stages. However, the field was read only in the parse-analysis phase, which means that it was always 0 when it was read.

A third line of reasoning is that the regression tests are happy without it, and there was a check in the ORCA translator too that would've thrown an error if it was ever non-zero.

I left the field in place in the struct, to avoid a catalog change, but it is now unused. WindowRef nodes can be stored in catalogs, as part of views, I believe.

- 17 August 2017 (1 commit)
  By Heikki Linnakangas
This allows removing all the code in CTranslatorDXLToPlStmt that tracked the parent of each call. I found the plan node IDs awkward when I was hacking on CTranslatorDXLToPlStmt. I tried to make a change where a function would construct a child Plan node first, and a Result node on top of that, but only if necessary, depending on the kind of child plan. The parent plan node IDs made it impossible to construct a part of the Plan tree like that, in a bottom-up fashion, because you always had to pass the parent's ID when constructing a child node. Now that is possible.
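The bottom-up construction this enables can be illustrated with toy node types (hypothetical stand-ins, not the real Plan structs): build the child first, then wrap it in a Result only when needed; nothing requires a parent ID at construction time.

```cpp
#include <memory>

// Toy plan nodes (hypothetical stand-ins for the real executor nodes).
struct Plan { virtual ~Plan() {} };
struct SeqScan : Plan {};
struct Result : Plan {
    std::unique_ptr<Plan> child;
};

// Bottom-up: the child is constructed without knowing its parent,
// and a Result node is placed on top only if a projection is needed.
std::unique_ptr<Plan> BuildScanSketch(bool needs_projection) {
    std::unique_ptr<Plan> child(new SeqScan());
    if (!needs_projection)
        return child;
    std::unique_ptr<Result> result(new Result());
    result->child = std::move(child);
    return std::move(result);
}
```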

- 2 August 2017 (1 commit)
  By Haisheng Yuan

- 15 July 2017 (1 commit)
  By Heikki Linnakangas
* Remove PartOidExpr, it's not used in GPDB.

The target lists of DML nodes that ORCA generates include a column for the target partition OID, which can then be referenced by PartOidExprs. ORCA uses these to allow sorting the tuples by partition before inserting them into the underlying table. That feature is used by HAWQ, where grouping tuples that go to the same output partition is cheaper. Since commit adfad608, which removed the gp_parquet_insert_sort GUC, we don't do that in GPDB, however. GPDB can hold multiple result relations open at the same time, so there is no performance benefit to grouping the tuples first (or at least not enough benefit to counterbalance the cost of a sort). So remove the now-unused support for PartOidExpr in the executor.

* Bump ORCA version to 2.37
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>

* Removed acceptedLeaf
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>

- 3 June 2017 (1 commit)
  By Haisheng Yuan
Static variables used inside a function are initialized only once and stored in the static storage area. The original code, without static, initialized these variables every time the function was called, storing them on the stack. Since we don't change the array values, they can be static const.
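The difference is easy to demonstrate: a function-local static is initialized exactly once, on first use, while a plain local is rebuilt on every call. A small self-contained illustration (not the original arrays):

```cpp
// Counter so we can observe how often initialization actually runs.
static int init_calls = 0;

static int CountedInit(int v) {
    ++init_calls;
    return v;
}

int LookupStatic(int i) {
    // Initialized once, on the first call; lives in static storage.
    static const int vals[] = { CountedInit(10), CountedInit(20), CountedInit(30) };
    return vals[i];
}

int LookupLocal(int i) {
    // Re-initialized on every call; lives on the stack.
    const int vals[] = { CountedInit(10), CountedInit(20), CountedInit(30) };
    return vals[i];
}
```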

- 25 April 2017 (1 commit)
  By Heikki Linnakangas
ORCA can do some optimizations -- partition pruning at least -- if it can "see" into the elements of an array in a ScalarArrayOpExpr. For example, if you have a qual like "column IN (1, 2, 3)", and the table is partitioned on column, it can eliminate partitions that don't hold those values. The IN-clause is converted into a ScalarArrayOpExpr, so that is really equivalent to "column = ANY <array>". However, ORCA doesn't know how to extract elements from an array-typed Const, so it can only do that if the array in the ScalarArrayOpExpr is an ArrayExpr. Normally, eval_const_expressions() simplifies any ArrayExpr into a Const if all the elements are constants, but we had disabled that when ORCA was used, to keep the ArrayExprs visible to it.

There are a couple of reasons why that was not a very good solution. First, while we refrain from converting an ArrayExpr to an array Const, it doesn't help if the argument was an array Const to begin with. The "x IN (1,2,3)" construct is converted to an ArrayExpr by the parser, but we would miss the opportunity if it's written as "x = ANY ('{1,2,3}'::int[])" instead. Secondly, by not simplifying the ArrayExpr, we miss the opportunity to simplify the expression further. For example, if you have a qual like "1 IN (1,2)", we can evaluate that completely at plan time to 'true', but we would not do that with ORCA because the ArrayExpr was not simplified.

To be able to also optimize those cases, and to slightly reduce our diff vs. upstream in clauses.c, always simplify ArrayExprs to Consts when possible. To compensate, so that ORCA still sees ArrayExprs rather than array Consts (in those cases where it matters), when a ScalarArrayOpExpr is handed over to ORCA, we check if the argument array is a Const, and convert it back to an ArrayExpr if it is.

Signed-off-by: Jemish Patel <jpatel@pivotal.io>
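The compensation described above amounts to an inverse pair of rewrites: constant folding collapses an all-constant ArrayExpr into one array Const, and the ORCA hand-off expands a Const array back into element form so the elements are visible again. A toy model of that round trip (hypothetical structs, not the real planner nodes):

```cpp
#include <vector>

// Toy expression: either a folded array Const or an ArrayExpr
// whose individual elements are visible.
struct ToyArray {
    bool is_const;
    std::vector<int> elements;
};

// Like eval_const_expressions(): fold an all-constant ArrayExpr
// into a single array Const.
ToyArray FoldToConst(ToyArray e) {
    e.is_const = true;
    return e;
}

// Before handing a ScalarArrayOpExpr to ORCA: if the argument is a
// Const array, convert it back so the elements can be inspected.
ToyArray ExpandForOrca(ToyArray e) {
    e.is_const = false;
    return e;
}
```

The round trip loses nothing: the element list is carried through both directions unchanged.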

- 12 April 2017 (1 commit)
  By Venkatesh Raghavan

- 1 April 2017 (1 commit)
  By foyzur
GPDB supports range and list partitions. Range partitions are represented as a set of rules. Each rule defines the boundaries of a part. E.g., a rule might say that a part contains all values between (0, 5], where the left bound is 0, exclusive, and the right bound is 5, inclusive. List partitions are defined by a list of values that the part will contain.

ORCA uses the above rule definition to generate expressions that determine which partitions need to be scanned. These expressions are of the following types:

1. Equality predicate, as in PartitionSelectorState->levelEqExpressions: if we have a simple equality on the partitioning key (e.g., part_key = 1).
2. General predicate, as in PartitionSelectorState->levelExpressions: if we need more complex composition, including non-equality, such as part_key > 1.

Note: we also have the residual predicate, which the optimizer currently doesn't use. We are planning to remove this dead code soon.

Prior to this PR, ORCA was treating both range and list partitions as range partitions. This meant that each list part would be converted to a set of list values, and each of these values would become a single-point range partition. E.g., consider the DDL:

```sql
CREATE TABLE DATE_PARTS (id int, year int, month int, day int, region text)
DISTRIBUTED BY (id)
PARTITION BY RANGE (year)
    SUBPARTITION BY LIST (month)
        SUBPARTITION TEMPLATE (
            SUBPARTITION Q1 VALUES (1, 2, 3),
            SUBPARTITION Q2 VALUES (4, 5, 6),
            SUBPARTITION Q3 VALUES (7, 8, 9),
            SUBPARTITION Q4 VALUES (10, 11, 12),
            DEFAULT SUBPARTITION other_months )
( START (2002) END (2012) EVERY (1),
  DEFAULT PARTITION outlying_years );
```

Here we partition the months as a list partition using quarters, so each list part contains three months. Now consider a query on this table:

```sql
select * from DATE_PARTS where month between 1 and 3;
```

Prior to this PR, the ORCA-generated plan would consider each value of Q1 as a separate range part with just one point range. I.e., we would have 3 virtual parts to evaluate for just one Q1: [1], [2], [3]. This approach is inefficient. The problem is further exacerbated when we have multi-level partitioning. Consider the list part of the above example: we have only 4 rules for 4 different quarters, but we would have 12 different virtual rules (aka constraints). For each such constraint, we would then evaluate the entire subtree of partitions.

After this PR, we no longer decompose rules into constraints for list parts and then derive single-point virtual range partitions based on those constraints. Rather, the new ORCA changes use ScalarArrayOp to express selectivity on a list of values. So the expression for the above SQL will look like 1 <= ANY {month_part} AND 3 >= ANY {month_part}, where month_part is substituted at runtime with a different list of values for each of the quarterly partitions. We end up evaluating that expression 4 times with the following lists of values:

Q1: 1 <= ANY {1,2,3} AND 3 >= ANY {1,2,3}
Q2: 1 <= ANY {4,5,6} AND 3 >= ANY {4,5,6}
...

Compare this to the previous approach, where we would end up evaluating 12 different expressions, each time for a single point value:

First constraint of Q1: 1 <= 1 AND 3 >= 1
Second constraint of Q1: 1 <= 2 AND 3 >= 2
Third constraint of Q1: 1 <= 3 AND 3 >= 3
First constraint of Q2: 1 <= 4 AND 3 >= 4
...

The ScalarArrayOp depends on a new type of expression, PartListRuleExpr, that can convert a list rule to an array of values. ORCA-specific changes can be found here: https://github.com/greenplum-db/gporca/pull/149
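The rewritten selector predicate has plain ANY semantics, so its pruning effect can be checked directly. A sketch of how "lo <= ANY {vals} AND hi >= ANY {vals}" behaves for the quarterly lists above (an illustration of the semantics, not ORCA's implementation):

```cpp
#include <vector>

// Evaluates: lo <= ANY (vals) AND hi >= ANY (vals).
bool PartitionMatches(int lo, int hi, const std::vector<int> &vals) {
    bool lo_le_any = false;  // lo <= ANY {vals}
    bool hi_ge_any = false;  // hi >= ANY {vals}
    for (int v : vals) {
        if (lo <= v) lo_le_any = true;
        if (hi >= v) hi_ge_any = true;
    }
    return lo_le_any && hi_ge_any;
}
```

For month BETWEEN 1 AND 3 this predicate runs once per quarter (4 evaluations, only Q1 matching) instead of once per single-point constraint (12 evaluations).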

- 28 February 2017 (1 commit)
  By Daniel Gustafsson
Fix various typos that seemed common.

- 21 January 2017 (1 commit)
  By Ekta Khanna
Signed-off-by: Xin Zhang <xzhang@pivotal.io>

- 20 January 2017 (1 commit)
  By Heikki Linnakangas
These warnings are not enabled by default, but you'll see them with -Wall.

- 15 June 2016 (1 commit)
  By Heikki Linnakangas
This is mostly copy-pasted from CoerceToDomain. Note: This requires an up-to-date version of ORCA to compile, older versions of the ORCA library itself don't know about CoerceViaIO nodes either.

- 18 May 2016 (1 commit)

- 11 December 2015 (1 commit)
  By Entong Shen
This commit eliminates the global new/delete overrides that were causing compatibility problems (the Allocators.(h/cpp/inl) files have been completely removed). The GPOS `New()` macro is retained and works the same way, but has been renamed `GPOS_NEW()` to avoid confusion and possible name collisions. `GPOS_NEW()` works only for allocating singleton objects. For allocating arrays, `GPOS_NEW_ARRAY()` is provided. Because we no longer override the global delete, objects/arrays allocated by `GPOS_NEW()` and `GPOS_NEW_ARRAY()` must now be deleted by the new functions `GPOS_DELETE()` and `GPOS_DELETE_ARRAY()` respectively. All code in GPOS has been retrofitted for these changes, but Orca and other code that depends on GPOS should also be changed. Note that `GPOS_NEW()` and `GPOS_NEW_ARRAY()` should both be exception-safe and not leak memory when a constructor throws. Closes #166
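The pattern described, placement-style allocation macros paired with explicit delete functions, exception-safe so a throwing constructor does not leak, can be mimicked in miniature. This is a toy re-creation of the idea, not the real GPOS macros or memory pool:

```cpp
#include <cstddef>
#include <new>

// A trivial memory pool that tracks live allocations (toy stand-in).
struct ToyPool {
    int live = 0;
    void *Allocate(std::size_t n) { ++live; return ::operator new(n); }
    void Free(void *p) { --live; ::operator delete(p); }
};

// Singleton allocation in the spirit of GPOS_NEW(pool) T(...).
template <typename T>
T *PoolNew(ToyPool *pool) {
    void *mem = pool->Allocate(sizeof(T));
    try {
        return new (mem) T();  // placement-new into pool memory
    } catch (...) {
        pool->Free(mem);       // no leak if the constructor throws
        throw;
    }
}

// Explicit delete in the spirit of GPOS_DELETE(): destroy, then free.
template <typename T>
void PoolDelete(ToyPool *pool, T *obj) {
    obj->~T();
    pool->Free(obj);
}
```

Because the global operator delete is no longer overridden, pairing each PoolNew with a PoolDelete (rather than plain `delete`) is what keeps the pool's bookkeeping correct, which mirrors why GPOS_DELETE()/GPOS_DELETE_ARRAY() became mandatory.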

- 24 November 2015 (1 commit)
  By Venkatesh Raghavan

- 28 October 2015 (1 commit)