- 29 Sep 2020, 2 commits
-
-
Committed by Jesse Zhang
The canonical config file is in src/backend/gpopt/.clang-format (instead of under the non-existent src/backend/gporca), so I've created one symlink (instead of two) for the GPOPT headers. Care has been taken to point the symlink at the canonical config under gpopt, instead of gporca as it is under HEAD. This is spiritually a cherry-pick of commit 2f7dd76c. (cherry picked from commit 2f7dd76c)
-
Committed by Shreedhar Hardikar
In a previous ORCA version (3.311) we added code to fall back gracefully when a subquery select list contains a single outer ref that is not part of an expression, such as in `select * from foo where a is null or a = (select foo.b from bar)`. This commit adds a fix that allows us to handle such queries in ORCA by adding a project in the translator that echoes the outer ref from within the subquery, and using that projected value in the select list of the subquery. This ensures that we use a NULL value for the scalar subquery in the expression for the outer ref when the subquery returns no rows. Also note that this is still skipped for grouping cols in the target list. This was done to avoid regressions for certain queries, such as `select * from A where not exists (select sum(C.i) from C where C.i = A.i group by a.i);`. ORCA is currently unable to decorrelate subqueries that contain project nodes, so a `SELECT 1` in the subquery would also cause this regression. In the above query, the parser adds `a.i` to the target list of the subquery, which would get an echo projection (as described above) and thus would prevent decorrelation by ORCA. For this reason, we decided to maintain the existing behavior until ORCA is able to handle projections in subqueries better. Also add ICG tests. Co-authored-by: Hans Zeller <hzeller@pivotal.io> Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
- 28 Sep 2019, 1 commit
-
-
Committed by Sambitesh Dash
optimizer_enable_dml is set to true by default. When set to false, ORCA will fall back to the planner for all DML queries.
-
- 13 Jun 2019, 1 commit
-
-
Committed by Jinbao Chen
In `copy (select statement) to file`, we generate a query plan, set its dest receiver to copy_dest_receiver, and run the dest receiver on the QD. In `copy (select statement) to file on segment`, we modify the query plan, delete the Gather Motion, and let the dest receiver run on the QEs. Change `isCtas` in Query to `parentStmtType` to be able to mark the type of the parent utility statement. Add a CopyIntoClause node to store copy information, and add copyIntoClause to PlannedStmt. In Postgres, we don't need to make a different query plan for a query contained in a utility statement, but in Greenplum we do, so we use a field to indicate whether the query is contained in a utility statement, and the type of that utility statement. The behavior of `copy (select statement) to file on segment` is actually very similar to `SELECT ... INTO ...` and `CREATE TABLE ... AS SELECT ...`: we use the distribution policy inherent in the query result as the final data distribution policy; if there is none, we use the first column in the target list as the key, and redistribute. The only difference is that we use `copy_dest_receiver` instead of `intorel_dest_receiver`. This commit is backported from bad6cebc. Co-authored-by: Wen Lin <wlin@pivotal.io>
-
- 07 Jun 2019, 1 commit
-
-
Committed by Jesse Zhang
Commit 679c10b3 added a required field to the `Query` node. This breaks backwards compatibility of the catalog. In Postgres (and Greenplum), we store views in the catalog as serialized parsed queries, so introducing a new required field invalidates any previously serialized query. Specifically, this means views in 5.x created prior to 679c10b3 cannot be queried after that commit. We found out because we tried running `gpstart` from HEAD of 5X_STABLE on a cluster created by 5.19.0. This reverts commit 679c10b3. Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
-
- 03 Jun 2019, 1 commit
-
-
Committed by Jinbao Chen
In `copy (select statement) to file`, we generate a query plan, set its dest receiver to copy_dest_receiver, and run the dest receiver on the QD. In `copy (select statement) to file on segment`, we modify the query plan, delete the Gather Motion, and let the dest receiver run on the QEs. Change `isCtas` in Query to `parentStmtType` to be able to mark the type of the parent utility statement. Add a CopyIntoClause node to store copy information, and add copyIntoClause to PlannedStmt. In Postgres, we don't need to make a different query plan for a query contained in a utility statement, but in Greenplum we do, so we use a field to indicate whether the query is contained in a utility statement, and the type of that utility statement. The behavior of `copy (select statement) to file on segment` is actually very similar to `SELECT ... INTO ...` and `CREATE TABLE ... AS SELECT ...`: we use the distribution policy inherent in the query result as the final data distribution policy; if there is none, we use the first column in the target list as the key, and redistribute. The only difference is that we use `copy_dest_receiver` instead of `intorel_dest_receiver`. This commit is backported from bad6cebc. Co-authored-by: Wen Lin <wlin@pivotal.io>
-
- 01 Jun 2019, 1 commit
-
-
Committed by Chris Hajas
The IMemoryPool interface was removed in ORCA to eliminate an unnecessary abstraction layer and avoid costly casting. Corresponding ORCA commit: e64a2b42. Bumps ORCA version to 3.46.0. Authored-by: Chris Hajas <chajas@pivotal.io>
-
- 16 Aug 2018, 1 commit
-
-
Committed by Bhuvnesh Chaudhary
As part of moving away from Hungarian notation in the GPORCA codebase, the integration points between GPORCA and GPDB in the translator have been renamed to the new convention used in GPORCA. The libraries currently updated to the new notation in GPORCA are Naucrates and GPOS. The new naming convention is a custom version of common C++ naming conventions; the style guide for this convention can be found in the GPORCA repository. Also bump ORCA version to 2.69.0. Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io> Co-authored-by: Melanie Plageman <mplageman@pivotal.io> Co-authored-by: Ekta Khanna <ekhanna@pivotal.io> Co-authored-by: Abhijit Subramanya <asubramanya@pivotal.io> Co-authored-by: Sambitesh Dash <sdash@pivotal.io> Co-authored-by: Dhanashree Kashid <dkashid@pivotal.io> Co-authored-by: Omer Arap <oarap@pivotal.io>
-
- 07 Aug 2018, 1 commit
-
-
Committed by Bhuvnesh Chaudhary
Commit a3f7f4d7 introduced a fallback when a query contains ONLY in the FROM clause for relations. However, it should have excluded external tables: for external tables, RangeTblEntry::inh is always set to false, so the commit caused ORCA to fall back for all external table queries. This commit fixes the issue by excluding external tables from the ONLY-clause check in CTranslatorQueryToDXL::PdxlnFromRelation(), and adds relevant test cases.
-
- 17 May 2018, 1 commit
-
-
Committed by Jesse Zhang
Fixes greenplum-db/gporca#358
-
- 23 Mar 2018, 1 commit
-
-
Committed by Sambitesh Dash
Signed-off-by: Sambitesh Dash <sdash@pivotal.io> Signed-off-by: Jesse Zhang <sbjesse@gmail.com> (cherry picked from commit a3f7f4d7)
-
- 15 Feb 2018, 1 commit
-
-
Committed by Jesse Zhang
ORCA has historically ignored type modifiers from databases that support them, notably Postgres and Greenplum. This has led to surprises in a few cases: 1. The output description over the wire (for the Postgres protocol) loses the type modifier information, which often means length. This surprises code that expects a non-default type modifier, e.g. a JDBC driver. 2. The executor in some cases, notably DML, expects a precise type modifier. Because ORCA always erased the type modifiers and presented a default, the executor was forced to find that information elsewhere. After this commit, ORCA is aware of type modifiers in table columns, scalar identifiers, constants, and length-coercion casts. Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io> (cherry picked from commit 2d907526)
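As a rough illustration of what a type modifier carries, the following sketch assumes Postgres's convention for varchar(n): the stored typmod is the declared length plus the 4-byte varlena header, and -1 means "no modifier", which is the default ORCA used to erase every column to. The constant and helper names here are ours, not GPDB's or ORCA's:

```cpp
#include <cassert>

// Postgres-style varchar typmod encoding (illustrative names).
constexpr int VARHDRSZ = 4;    // varlena header size in bytes
constexpr int NO_TYPMOD = -1;  // the "default" modifier ORCA used to emit

// varchar(n) stores n + VARHDRSZ as its type modifier.
int EncodeVarcharTypmod(int declared_len) {
    return declared_len + VARHDRSZ;
}

// Recover the declared length; a negative typmod means "no modifier".
int DecodeVarcharTypmod(int typmod) {
    return typmod < 0 ? NO_TYPMOD : typmod - VARHDRSZ;
}
```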
-
- 25 Jan 2018, 1 commit
-
-
Committed by Shreedhar Hardikar
This information can be easily derived from the CDXLColRef member of CDXLScalarIdent. This now mirrors what is done in the ORCA types CScalarIdent and CColRef. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
- 07 Sep 2017, 1 commit
-
-
Committed by Haisheng Yuan
Planner generates a plan that doesn't insert any motion between a WorkTableScan and its corresponding RecursiveUnion, because currently in GPDB motions are not rescannable. For example, an MPP plan for a recursive CTE query may look like:

```
Gather Motion 3:1
  -> Recursive Union
       -> Seq Scan on department
            Filter: name = 'A'::text
       -> Nested Loop
            Join Filter: d.parent_department = sd.id
            -> WorkTable Scan on subdepartment sd
            -> Materialize
                 -> Broadcast Motion 3:3
                      -> Seq Scan on department d
```

In the current solution, the WorkTableScan is always put on the outer side of the topmost Join (the recursive part of the RecursiveUnion), so that we can safely rescan the inner child of the join without worrying about the materialization of a potential underlying motion. This is a heuristic-based plan, not a cost-based plan. Ideally, the WorkTableScan could be placed on either side of the join at any depth, and the plan should be chosen based on the cost of the recursive plan and the number of recursions, but we will leave that for later work. Note: hash join is temporarily disabled for plan generation of the recursive part, because if the hash table spills, the batch file is removed as it executes. We have a follow-up story to make a spilled hash table rescannable. See the discussion on the gpdb-dev mailing list: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/s_SoXKlwd6I
-
- 22 Aug 2017, 1 commit
-
-
Committed by Heikki Linnakangas
This is potentially a tiny bit faster, as the coercion can be performed just once at parse/plan time, rather than on every row. This fixes some of the bogus error checks and inconsistencies in handling the ROWS expressions. For example, before, if you passed a string constant as the ROWS expression you got an error, but if you passed a more complicated expression that returned a string, the string was cast to an integer at runtime, and those casts evaded the plan-time checks for negative values. Also, move the checks for negative ROWS/RANGE values from the parser to the beginning of execution, even in the cases where the value is a constant or a stable expression that only needs to be evaluated once. We were missing these checks in ORCA, so this fixes the behavior with ORCA for such queries.
-
- 09 Aug 2017, 1 commit
-
-
Committed by Bhuvnesh Chaudhary
In ORCA, we do not process interrupts during the planning stage. However, if there are elog/ereport statements (which in turn call errfinish) to print additional messages, we prematurely exit the planning stage without cleaning up the memory pools, leaving the memory pool state inconsistent. This results in crashes for subsequent queries. This commit fixes the issue by handling interrupts while printing messages using elog/ereport in ORCA. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
- 19 Jul 2017, 1 commit
-
-
Committed by Bhuvnesh Chaudhary
This commit introduces a new operator for ValuesScan. Earlier, we generated a `UNION ALL` for cases where the VALUES lists passed are all constants; now a new operator, CLogicalConstTable, with an array of const tuples is generated instead. Once the plan is generated by ORCA, it is translated to a ValuesScan node in GPDB. This enhancement significantly improves the total run time of queries involving a values scan with const values in ORCA. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
- 03 Jun 2017, 1 commit
-
-
Committed by Haisheng Yuan
Static variables used inside a function are initialized only once and stored in the static storage area. The original code, without static, initialized these variables every time the function was called and stored them on the stack. Since we don't change the array values, they can be static const.
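The difference can be demonstrated with a small standalone sketch (the function, array, and counter below are ours, for illustration): the initializer of a function-local `static const` array runs exactly once, on the first call, rather than on every call.

```cpp
#include <cassert>

int init_count = 0;  // counts how many times the array initializer runs

int LookupCost(int i) {
    // Static storage duration: this initializer executes only on the
    // first call, instead of the array being rebuilt on the stack on
    // every call. The comma expression bumps the counter as a probe.
    static const int costs[] = {(void(++init_count), 10), 20, 30};
    return costs[i];
}
```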
-
- 01 Jun 2017, 1 commit
-
-
Committed by Venkatesh Raghavan
-
- 15 May 2017, 1 commit
-
-
Committed by Venkatesh Raghavan
* Enable analyzing root partitions
* Ensure that the name of the guc is clear
* Remove double negation (where possible)
* Update comments
* Co-locate gucs that have similar purpose
* Remove dead gucs
* Classify them correctly so that they are no longer hidden
-
- 20 Jan 2017, 2 commits
-
-
Committed by Heikki Linnakangas
It gives compiler warnings:

```
/home/heikki/gpdb/orca-install/include/gpos/common/CDynamicPtrArray.inl:382:3: warning: nonnull argument ‘this’ compared to NULL [-Wnonnull-compare]
   if (NULL == this)
   ^~
```
-
Committed by Heikki Linnakangas
These warnings are not enabled by default, but you'll see them with -Wall.
-
- 19 Dec 2016, 1 commit
-
-
Committed by Daniel Gustafsson
The different kinds of NOTICE messages regarding table distribution were using a mix of upper and lower case for 'DISTRIBUTED BY'. Make them consistent by using upper case for all messages and update the test files, and atmsort regexes, to match.
-
- 10 Nov 2016, 1 commit
-
-
Committed by Heikki Linnakangas
Remove a bunch of functions and classes that are not used anywhere.
-
- 02 Nov 2016, 1 commit
-
-
Committed by Haisheng Yuan
gporca has a set of banned API calls which need to be allowed with the ALLOW_xxx macro in order for gpopt to compile. But it should be the library caller's (GPDB/ORCA) responsibility to take care of the function call. See discussions on greenplum-db/gpdb#1136 and https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/Mcw6JPav6h4
-
- 20 Oct 2016, 1 commit
-
-
Committed by Daniel Gustafsson
libgpos has a set of banned API calls which need to be allowed with the ALLOW_xxx macro in order for gpopt to compile (and thus run). The changes to ereport() brought a need for allowing abort(), since it now invokes abort when building with --enable-cassert. This is a temporary fix awaiting the removal of function-call banning entirely. Pushed even though the CI pipeline failed to provide a clean run (for seemingly unrelated reasons), because the absence of this fix was blocking other efforts.
-
- 16 Jul 2016, 1 commit
-
-
- 23 Jun 2016, 1 commit
-
-
- 22 Jun 2016, 1 commit
-
-
- 19 May 2016, 1 commit
-
-
- 10 May 2016, 1 commit
-
-
- 22 Mar 2016, 1 commit
-
-
Committed by Heikki Linnakangas
All of the callers are in places where leaking a few bytes of memory to the current memory context will do no harm: either parsing, processing a DDL command, or planning. So let's simplify the callers by removing the argument. That makes the code match the upstream again, which makes merging easier. These changes were originally made to reduce memory consumption when doing parse analysis on a heavily partitioned table, but the previous commit provided a more wholesale solution for that, so we don't need to nickel-and-dime every allocation anymore.
-
- 30 Dec 2015, 1 commit
-
-
Committed by Heikki Linnakangas
* Add 'const' to the arguments of some functions. While we're at it, remove the duplicate extern declaration of FaultInjector_InjectFaultIfSet from gpdbdefs.h.
* pstrdup() constants passed to makeString(). I think the lack of a copy was harmless, but makeString() explicitly says that the caller should make a copy, and all other callers seem to obey that, so better safe than sorry.
-
- 11 Dec 2015, 1 commit
-
-
Committed by Entong Shen
This commit eliminates the global new/delete overrides that were causing compatibility problems (the Allocators.(h/cpp/inl) files have been completely removed). The GPOS `New()` macro is retained and works the same way, but has been renamed `GPOS_NEW()` to avoid confusion and possible name collisions. `GPOS_NEW()` works only for allocating singleton objects. For allocating arrays, `GPOS_NEW_ARRAY()` is provided. Because we no longer override the global delete, objects/arrays allocated by `GPOS_NEW()` and `GPOS_NEW_ARRAY()` must now be deleted by the new functions `GPOS_DELETE()` and `GPOS_DELETE_ARRAY()` respectively. All code in GPOS has been retrofitted for these changes, but Orca and other code that depends on GPOS should also be changed. Note that `GPOS_NEW()` and `GPOS_NEW_ARRAY()` should both be exception-safe and not leak memory when a constructor throws. Closes #166
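The pool-bound allocation style described above can be sketched with placement new. This is a simplified mock of the idea, not the actual GPOS implementation (the real GPOS_NEW also handles exception safety when a constructor throws, and GPOS_NEW_ARRAY covers arrays); all names here are ours:

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Minimal stand-in for a GPOS memory pool: it just counts live allocations.
struct MockMemoryPool {
    int live = 0;
    void* Allocate(std::size_t n) { ++live; return std::malloc(n); }
    void Free(void* p) { --live; std::free(p); }
};

// Simplified analogue of GPOS_NEW: allocate from an explicit pool and
// construct with placement new, instead of overriding global operator new.
#define MOCK_GPOS_NEW(pool, T) (new ((pool)->Allocate(sizeof(T))) T)

// Simplified analogue of GPOS_DELETE: because global operator delete is not
// overridden, the destructor runs explicitly and the memory goes back to
// the owning pool.
template <typename T>
void MockGposDelete(MockMemoryPool* pool, T* obj) {
    obj->~T();
    pool->Free(obj);
}
```

The key design point the commit describes is visible here: once the global delete override is gone, every allocation must name its pool at both ends of the object's lifetime.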
-
- 24 Nov 2015, 1 commit
-
-
Committed by Venkatesh Raghavan
-
- 28 Oct 2015, 1 commit
-
-