- 16 Jan 2019, 15 commits
-
-
Committed by Hubert Zhang
This reverts commit b95059a8.
-
Committed by Hubert Zhang
This reverts commit e87fdd1a.
-
Committed by Hubert Zhang
This reverts commit 1045e4e7.
-
Committed by Hubert Zhang
The diskquota extension needs two kinds of hooks: 1. hooks to detect active tables while tables are being modified; 2. hooks to cancel a query whose quota limit has been reached. These two kinds of hooks are described in detail in the wiki: https://github.com/greenplum-db/gpdb/wiki/Greenplum-Diskquota-Design#design-of-diskquota They correspond to two components: the Quota Enforcement Operator and the Quota Change Detector.
Co-authored-by: Haozhou Wang <hawang@pivotal.io>
Co-authored-by: Hao Wu <gfphoenix78@gmail.com>
-
Committed by Chuck Litzell
* Docs - update docs to note that system columns are unavailable in queries on replicated tables.
* Edits from reviewers
-
Committed by Chuck Litzell
* docs - replicated tables don't support updatable cursors
* Revert the change stating that DECLARE ... FOR UPDATE is not supported with replicated tables
-
Committed by David Yozie
-
Committed by David Yozie
-
Committed by Alexandra Wang
To "incrementally" recover the old primary as a mirror at a later time via pg_rewind, all xlog must be preserved from the point of divergence. Hence, a replication slot must be created at promote time. This commit adds logic to the FTS promote message handling to create a physical replication slot, and also sets the restart_lsn of the slot to start preserving the xlog right away.
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Andres Freund
When creating a physical slot it's often useful to immediately reserve the current WAL position, instead of only doing so after the first feedback message arrives. That e.g. allows slots to guarantee that all the WAL for a base backup will be available afterwards. Logical slots already have to reserve WAL during creation, so generalize that logic so it is usable for both physical and logical slots. Catversion bump because of the new parameter.
Author: Gurjeet Singh
Reviewed-By: Andres Freund
Discussion: CABwTF4Wh_dBCzTU=49pFXR6coR4NW1ynb+vBqT+Po=7fuq5iCw@mail.gmail.com
-
Committed by Alexandra Wang
pg_rewind --slot is mutually exclusive with --source-pgdata.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Alexandra Wang
Properly clean up after the replication slot behave test. The issue is that gpstart will implicitly rebalance the cluster when synced segment pairs are not in their preferred roles, but this functionality is broken with WAL replication. For more info: https://github.com/greenplum-db/gpdb/pull/6659
Co-authored-by: David Kimura <dkimura@pivotal.io>
Co-authored-by: Adam Berlin <aberlin@pivotal.io>
-
Committed by David Kimura
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by David Kimura
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Andres Freund
Move some more code for managing replication connection commands to streamutil.c. A later patch will introduce replication slot handling via pg_receivexlog, and this avoids duplicating the relevant code between pg_receivexlog and pg_recvlogical.
Author: Michael Paquier, with some editing by me.
-
- 15 Jan 2019, 13 commits
-
-
Committed by Wang Hao
Some plan node types, such as ModifyTable or MergeAppend, are not covered by assign_plannode_id(), so their child nodes are not assigned a proper plan_node_id. The plan_node_id is required by gpmon and instrumentation for monitoring purposes; without a proper plan_node_id assigned, the consistency of the monitoring data is broken. This commit refactors assign_plannode_id() to use plan_tree_walker. As a result, ModifyTable, MergeAppend, and potentially Sequence are covered. Another advantage of using plan_tree_walker is that when new node types are introduced, we don't need to update assign_plannode_id anymore; plan_tree_walker should handle them. Fixes https://github.com/greenplum-db/gpdb/issues/5247
Reviewed-by: Ning Yu <nyu@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Heikki Linnakangas
The PostgreSQL 9.2 merge added a setsid() call to postmaster.c, to work around issues with killing the regression tests. But that introduced problems of its own. If you started postmaster directly from the command line, with something like "postgres -D datadir", it would fail with:

ERROR: setsid() failed: Operation not permitted (postmaster.c:1025)

Another symptom was that I was not able to kill the server with CTRL-C, in the little bash script I use to launch the server. My bash script simply launches all the servers with:

#!/bin/bash
postmaster -D data-seg0 &
postmaster -D data-seg1 &
postmaster -D data-seg2 &
postmaster -D data-master

Commit message from the change on the 9.2 merge branch that introduced the setsid() call:

commit ef0cb844c87d8b9769b98a20ca66ecda00620604
Author: Paul Guo <paulguo@gmail.com>
Date: Fri Jul 27 18:11:31 2018 +0800

Set up a new session group in postmaster. Without this patch, pressing ctrl+c to terminate regression tests could bring down the master processes if the master is on the same host as the client. This issue happens 100% of the time after running the tests in greenplum_schedule. It is because the first test case in greenplum_schedule, instr_in_shmem_setup, restarts the cluster; without this patch, when we press ctrl+c to stop testing, SIGINT is delivered to the postmaster process also. This issue does not happen on gpdb master at this moment due to the existence of pmdaemonize(), which was removed in PG 9.2, and since that logic in PG seems unchanged since 9.2 it happens on PG master also. Not sure how PG upstream thinks about this, but I'm checking in this patch first since this really affects development.

This was really a problem in PostgreSQL, too, but no one had noticed or complained about it. After discussion on pgsql-hackers, it was fixed by this upstream commit:

commit bb24439c
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date: Mon Jan 14 14:50:58 2019 +0200

Detach postmaster process from pg_ctl's session at server startup. pg_ctl is supposed to daemonize the postmaster process, so that it's not affected by signals to the launching process group. Before this patch, if you had a shell script that used "pg_ctl start", and you interrupted the shell script after postmaster had been launched, postmaster was also killed. To fix, call setsid() after forking the postmaster process. Long ago, we had a 'silent_mode' option, which daemonized the postmaster process by calling setsid(), but that was removed back in 2011 (commit f7ea6bea). We discussed bringing that back in some form, but pg_ctl is the documented way of launching postmaster in the background, so putting the setsid() call in pg_ctl itself seems appropriate. Just putting postmaster in a separate session would change the behavior when you interrupt "pg_ctl -w start", e.g. with CTRL-C, while it's waiting for postmaster to start. The historical behavior has been that interrupting pg_ctl aborts the server launch, which is handy if the server is stuck in recovery, for example, and won't fully start up. To keep that behavior, install a signal handler in pg_ctl to explicitly kill postmaster if pg_ctl is interrupted while it's waiting for the server to start up. This isn't 100% watertight; there is a small window after forking the postmaster process where the signal handler doesn't know the postmaster's PID yet, but it seems good enough. Arguably this is a long-standing bug, but I refrained from back-patching out of fear of breaking someone's scripts that depended on the old behavior. Reviewed by Tom Lane. Report and original patch by Paul Guo, with feedback from Michael Paquier.

Discussion: https://www.postgresql.org/message-id/CAEET0ZH5Bf7dhZB3mYy8zZQttJrdZg_0Wwaj0o1PuuBny1JkEw%40mail.gmail.com

This commit reverts the new setsid() call that was added during the 9.2 merge, and backports the upstream fix instead.
-
Committed by Huiliang.liu
The AIX server is not available, so remove the test job from the release candidate pipeline. This is a temporary change until the server is available again.
-
Committed by Yandong Yao
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
-
Committed by Heikki Linnakangas
Copy-pasto, spotted while reading the code. I don't know what consequences the wrong owner type would have.
-
Committed by Ashwin Agrawal
These tests were previously run in a separate job because they needed a cluster created with no mirrors. They could run only without mirrors due to the limitation of allowing just one walsender-walreceiver connection (max_walsender=1). Now that that restriction no longer exists, these tests can run like any other tests as part of regular ICW, even if the cluster is created with mirrors.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Karen Huddleston
Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
Co-authored-by: Bradford Boyle <bboyle@pivotal.io>
-
Committed by Mel Kiyama
* docs - migrate using gpcopy - remove the restriction that the source and destination must have the same number of hosts.
* docs - migrate using gpcopy - updates based on review comments.
-
Committed by Karen Huddleston
The concourse container for terraform tests is only used to ssh to the terraform cluster, so we can use a lighter-weight image. Note: regression_tests_gphdfs_mapr_centos does not use the ccp image because it runs tests and GPDB from the Concourse container, and uses a terraform host to run hadoop/mapr.
Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
Co-authored-by: David Sharp <dsharp@pivotal.io>
-
- 14 Jan 2019, 4 commits
-
-
Committed by Heikki Linnakangas
QE processes largely don't care about client_encoding, because query results are sent through the interconnect, except for a few internal commands, and the query text in QD->QE messages is presumed to already be in the database encoding. But there were a couple of cases where it mattered. Error messages generated in QEs were being converted to client_encoding, but the QD assumed that they were in the server encoding. Now that the QEs don't know the user's client_encoding, COPY TO needs changes: in COPY TO, the QEs are responsible for forming the rows in the final client_encoding, so the QD now needs to explicitly pass the COPY's ENCODING option when it dispatches the COPY to QEs. The COPY TO handling wasn't quite right even before this patch. It showed up as a regression failure in the src/test/mb/mbregress.sh 'sjis' test. When client_encoding was set with PGCLIENTENCODING, however, it wasn't set correctly in the QEs, which showed up as incorrectly encoded COPY output. Now that we always set it to match the database encoding in QEs, that's moot. While we're at it, change the mbregress test so that it's not sensitive to row orderings, and make atmsort.pm more lenient, so it recognizes "COPY <tablename> TO STDOUT" even when the tablename contains non-ascii characters. These changes were needed to make the src/test/mb/ tests pass cleanly. Fixes https://github.com/greenplum-db/gpdb/issues/5241. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/WPmHXuU9T94/gvpNOE73FwAJ
Reviewed-by: Pengzhou Tang <ptang@pivotal.io>
-
Committed by Richard Guo
Previously, we did not consider LASJ when calculating the minimum sets of Relids required on each side of the join to form the outer join. As a result, the min_lefthand/min_righthand for LASJ was not correct. One of the consequences is that join_is_legal() works incorrectly for some queries, such as:

select * from a where a.i not in (select b.i from b left join c on b.i = c.i);

For this query, join_is_legal() treats the join of Rel B and C as illegal, because the min_righthand of the LASJ includes only B, rather than B and C. That would make us fail to find any legal join order and end up with "ERROR: failed to build any 2-way joins". This patch revises min_lefthand/min_righthand in SpecialJoinInfo for LASJ. It also removes the FIXME in join_is_legal(), which was an ugly workaround to make the query above work.
Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Shaoqi Bai
-
Committed by Shaoqi Bai
Since we ignore the nextOid counter in an ONLINE checkpoint, we can also remove nextRelfilenode. Even if there is a collision on the same relfilenode, the logic around GpCheckRelFileCollision will choose a different relfilenode, so there should be no harm.
-
- 12 Jan 2019, 5 commits
-
-
Committed by Bhuvnesh Chaudhary
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
This reverts commit dbece3da. A performance regression was observed for TPC-DS queries due to cardinality misestimation. Impacted TPC-DS queries: 174, 111 and 104.
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Kalen Krempely
gpstart calls gpstop, so honor the parallel parameter by passing it along to gpstop.
Authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Alexandra Wang
Relkind 'm' now stands for materialized view. This addresses a GPDB_93_MERGE_FIXME.
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Chuck Litzell
* docs - pl/r: add a section about creating plr_modules as a replicated table
* Use https URLs and specify the default schema in the create table statement.
-
- 11 Jan 2019, 3 commits
-
-
Committed by Heikki Linnakangas
* Add missing planstate_walker() support for it
* Set the memory account id correctly

While we're at it, change the assertion in GetMotionState() into an elog. If we hit bugs like this in the future, it's better to handle them gracefully than to crash the whole server. Fixes https://github.com/greenplum-db/gpdb/issues/6668
Reviewed-by: Ning Yu <nyu@pivotal.io>
-
Committed by Heikki Linnakangas
When we assign the AO segments to insert to, do it for all inherited tables that we might insert to, not just partitions. Otherwise, the insertion fails with:

ERROR: append-only table "child" file segment "-1" entry does not exist

or you get an assertion failure, if assertions are enabled. Fixes https://github.com/greenplum-db/gpdb/issues/6068.
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Heikki Linnakangas
We cleared the local 'useHeapMultiInsert' variable as soon as we saw at least one AO partition, even if we had already buffered tuples for a heap partition earlier. As a result, we didn't flush the multi-insert buffer at the end of the COPY. Fixes https://github.com/greenplum-db/gpdb/issues/6678
Reviewed-by: Adam Lee <ali@pivotal.io>
-