1. 23 March 2019, 3 commits
  2. 22 March 2019, 5 commits
    • Fix gpexpand flaky case (#7232) · d0839b23
      Committed by Jialun
      The kill may not yet have taken effect at the step "the database is
      killed on hosts", so we double-check the cluster and stop the database
      if it is still running. But if the kill takes effect only after that
      check, stop_database will raise an exception, because gpstop fails to
      stop a cluster that is not running. To fix this flaky case, we now stop
      the database first and check its status afterwards; an exception is
      raised only when gpstop fails and the cluster is still running.
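
      Below is a minimal sketch of the reordering described above: stop
      first, then verify. The helper names stop_database() and
      cluster_is_running() are assumptions for illustration, not the actual
      test implementation.

        # A minimal sketch, assuming hypothetical helpers stop_database()
        # and cluster_is_running(); the real test code may differ.
        def ensure_cluster_stopped(stop_database, cluster_is_running):
            try:
                # gpstop; may fail if the kill already brought the cluster down
                stop_database()
            except Exception:
                # The failure only matters if the cluster is still running.
                if cluster_is_running():
                    raise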
    • gprecoverseg: Add --no-progress flag. · eb064718
      Committed by Shoaib Lari
      For some areas of the ICW test framework -- isolation2 in particular --
      the additional data written to stdout by gprecoverseg's progress
      increased the load on the system significantly. (Some tests are
      buffering stdout without bound, for instance.)  Additionally, the
      updates were coming at ten times a second, which is an order of
      magnitude more than the update interval we get from pg_basebackup
      itself.
      
      To help with this, we have added a --no-progress flag that
      suppresses the output of pg_basebackup.  We have also changed the
      pg_basebackup progress update rate to once per second to minimize I/O.
      
      The impacted regression/isolation2 tests utilizing gprecoverseg have
      also been modified to use the --no-progress flag.
      Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
      Co-authored-by: Jacob Champion <pchampion@pivotal.io>
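
      A rough sketch of the behavior described above: suppress progress
      output entirely, or limit updates to about once per second. The class
      and parameter names are illustrative only, not the actual gprecoverseg
      internals.

        import sys
        import time

        # Illustrative only: drop progress output when no_progress is set,
        # otherwise rate-limit updates to roughly once per second.
        class ProgressWriter(object):
            def __init__(self, no_progress=False, interval=1.0, out=sys.stdout):
                self.no_progress = no_progress
                self.interval = interval
                self.out = out
                self._last = 0.0

            def update(self, line):
                if self.no_progress:
                    return
                now = time.monotonic()
                if now - self._last >= self.interval:
                    self.out.write(line + "\n")
                    self._last = now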
    • Add gprecoverseg -s to show progress sequentially · f04c206a
      Committed by Kalen Krempely
      When -s is present, pg_basebackup progress is shown sequentially, one
      line per update, instead of in place. This is useful when writing to a
      file, or when the tty does not support escape sequences. By default the
      progress is shown in place.
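
      A minimal sketch of the difference between the two display modes,
      assuming in-place updates rewrite the current line with a carriage
      return; this is illustrative, not the gprecoverseg code itself.

        import sys

        # Illustrative: sequential mode appends one line per update (safe
        # for files and terminals without escape-sequence support), while
        # in-place mode rewrites the current line.
        def report_progress(message, sequential=False, out=sys.stdout):
            if sequential:                   # e.g. gprecoverseg -s
                out.write(message + "\n")
            else:
                out.write("\r" + message)
                out.flush()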
    • gprecoverseg: Show progress of pg_basebackup on each segment · d41ca162
      Committed by Shoaib Lari
      The gprecoverseg utility runs pg_basebackup in parallel on all segments
      that are being recovered. In this commit, we log the progress of each
      pg_basebackup to a file on its host and display it to the user of
      gprecoverseg. The progress files are deleted upon successful completion
      of gprecoverseg.
      
      Unit tests have also been added.
      Authored-by: Shoaib Lari <slari@pivotal.io>
      Co-authored-by: Mark Sliva <msliva@pivotal.io>
      Co-authored-by: Jacob Champion <pchampion@pivotal.io>
      Co-authored-by: Ed Espino <edespino@pivotal.io>
      Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
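
      A rough sketch of the per-segment progress-file pattern described
      above: each segment's pg_basebackup writes progress to its own file,
      the latest line of each file is shown to the user, and the files are
      removed once recovery has completed successfully. The function names
      and file handling here are assumptions for illustration.

        import os

        # Illustrative only: one progress file per recovering segment.
        def latest_progress(path):
            with open(path) as f:
                lines = f.read().splitlines()
            return lines[-1] if lines else ""

        def report_and_cleanup(progress_files, succeeded):
            for path in progress_files:
                print("%s: %s" % (os.path.basename(path), latest_progress(path)))
            if succeeded:
                # progress files are deleted upon successful completion
                for path in progress_files:
                    os.remove(path)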
    • 9a2cf8cc
  3. 21 March 2019, 10 commits
  4. 20 March 2019, 3 commits
  5. 19 March 2019, 6 commits
  6. 18 March 2019, 4 commits
  7. 16 March 2019, 6 commits
  8. 15 March 2019, 3 commits
    • Have a quick fix on memory accounting test in gpos job (#7183) · 8454f0d2
      Committed by Jinbao Chen
      The error output of the gpos job is very different from that of the
      other jobs. Ignore it for now to fix the pipeline; we will find the
      root cause and re-enable the case later.
    • Add more tablespace tests in pg_basebackup (#7097) · fe6f56ad
      Committed by Shaoqi Bai
      The tests verify that a heap table and index, and a temporary table and
      index, created in a user tablespace still exist in the pg_basebackup
      output.
    • explain: fix 'rows' of partial / replicated tables · 5bd7930d
      Committed by Ning Yu
      A replicated table has a full replica of the data on each segment, so
      its 'rows' in the EXPLAIN output should not be scaled.

      A partial table's 'rows' in the EXPLAIN output should be scaled by its
      own numsegments.

      We used to scale both of the above cases by the cluster size in the
      EXPLAIN output, so the 'rows' were displayed incorrectly. This is a bug
      in the EXPLAIN output only; the cost calculation of the plan is not
      affected.
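
      A small, hypothetical illustration of the scaling rule described above,
      assuming the internal estimate is a per-segment row count; the function
      and parameter names are not the planner's actual API.

        # Hypothetical illustration of the display rule, not planner code.
        def displayed_rows(rows_per_segment, is_replicated, numsegments):
            if is_replicated:
                # every segment holds a full copy, so do not scale
                return rows_per_segment
            # a partially distributed table scales by its own numsegments,
            # not by the total number of segments in the cluster
            return rows_per_segment * numsegments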