1. 18 Feb 2020, 1 commit
  2. 06 Jan 2020, 1 commit
  3. 31 Dec 2019, 5 commits
  4. 27 Dec 2019, 7 commits
  5. 26 Dec 2019, 1 commit
  6. 25 Dec 2019, 1 commit
  7. 24 Dec 2019, 1 commit
    • perf/x86: Make perf callchains work without CONFIG_FRAME_POINTER · 1d3d91ad
      Committed by Kairui Song
      commit d15d356887e770c5f2dcf963b52c7cb510c9e42d upstream.
      
      Currently perf callchains don't work well with the ORC unwinder
      when sampling from a trace point. We get a useless in-kernel callchain
      like this:
      
      perf  6429 [000]    22.498450:             kmem:mm_page_alloc: page=0x176a17 pfn=1534487 order=0 migratetype=0 gfp_flags=GFP_KERNEL
          ffffffffbe23e32e __alloc_pages_nodemask+0x22e (/lib/modules/5.1.0-rc3+/build/vmlinux)
      	7efdf7f7d3e8 __poll+0x18 (/usr/lib64/libc-2.28.so)
      	5651468729c1 [unknown] (/usr/bin/perf)
      	5651467ee82a main+0x69a (/usr/bin/perf)
      	7efdf7eaf413 __libc_start_main+0xf3 (/usr/lib64/libc-2.28.so)
          5541f689495641d7 [unknown] ([unknown])
      
      The root cause is that, for trace point events, perf does not get a
      real snapshot of the hardware registers. Instead it fetches the
      required caller registers and composes a fake register snapshot
      which is supposed to contain enough information to start unwinding.
      However, without CONFIG_FRAME_POINTER we fail to get the caller's BP
      as the frame pointer, so the current frame pointer is returned
      instead. The result is an invalid register combination which confuses
      the unwinder and ends the stacktrace early.
      
      So in that case just don't try to dump BP, and let the unwinder start
      directly when the registers are not a real snapshot. Use SP as the
      skip mark: the unwinder will skip all frames until it meets the frame
      of the trace point caller.
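
      The change is roughly along the following lines (a simplified sketch of
      the upstream approach, not the literal diff): when the passed-in
      registers are not a real hardware snapshot, ignore BP/IP and start the
      unwinder from the saved SP so it skips up to the trace point caller.

          /* arch/x86/events/core.c (sketch) */
          static inline int perf_hw_regs(struct pt_regs *regs)
          {
                  return regs->flags & X86_EFLAGS_FIXED;
          }

          void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
                                     struct pt_regs *regs)
          {
                  struct unwind_state state;
                  unsigned long addr;

                  if (perf_hw_regs(regs)) {
                          /* Real snapshot: record IP and unwind from regs. */
                          if (perf_callchain_store(entry, regs->ip))
                                  return;
                          unwind_start(&state, current, regs, NULL);
                  } else {
                          /* Fake snapshot: start at SP, skip to the caller. */
                          unwind_start(&state, current, NULL, (void *)regs->sp);
                  }

                  for (; !unwind_done(&state); unwind_next_frame(&state)) {
                          addr = unwind_get_return_address(&state);
                          if (!addr || perf_callchain_store(entry, addr))
                                  return;
                  }
          }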
      
      Tested with both the frame pointer unwinder and the ORC unwinder; this
      makes the perf callchain show the full kernel-space stacktrace again,
      like this:
      
      perf  6503 [000]  1567.570191:             kmem:mm_page_alloc: page=0x16c904 pfn=1493252 order=0 migratetype=0 gfp_flags=GFP_KERNEL
          ffffffffb523e2ae __alloc_pages_nodemask+0x22e (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52383bd __get_free_pages+0xd (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52fd28a __pollwait+0x8a (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb521426f perf_poll+0x2f (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52fe3e2 do_sys_poll+0x252 (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52ff027 __x64_sys_poll+0x37 (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb500418b do_syscall_64+0x5b (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb5a0008c entry_SYSCALL_64_after_hwframe+0x44 (/lib/modules/5.1.0-rc3+/build/vmlinux)
      	7f71e92d03e8 __poll+0x18 (/usr/lib64/libc-2.28.so)
      	55a22960d9c1 [unknown] (/usr/bin/perf)
      	55a22958982a main+0x69a (/usr/bin/perf)
      	7f71e9202413 __libc_start_main+0xf3 (/usr/lib64/libc-2.28.so)
          5541f689495641d7 [unknown] ([unknown])
      Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Kairui Song <kasong@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20190422162652.15483-1-kasong@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com>
      Acked-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      1d3d91ad
  8. 19 Dec 2019, 10 commits
  9. 09 Dec 2019, 3 commits
  10. 08 Dec 2019, 10 commits
    • mm: thp: move deferred split queue to memcg's nodeinfo · 1d1b4c6c
      Committed by Yang Shi
      Commit 87eaceb3faa59b9b4d940ec9554ce251325d83fe ("mm: thp: make
      deferred split shrinker memcg aware") made the deferred split queue
      per-memcg to resolve a memcg premature OOM problem. But all nodes end
      up sharing the same queue, instead of one queue per node as before the
      commit. That is not a big deal for memcg limit reclaim, but it may
      cause global kswapd to shrink THPs from a different node.
      
      And 0-day testing reported a -19.6% regression in stress-ng's madvise
      test [1]. I didn't see that much regression on my test box (24 threads,
      48GB memory, 2 nodes); with the same test (stress-ng --timeout 1
      --metrics-brief --sequential 72 --class vm --exclude spawn,exec) I saw
      an average -3% regression (I ran the test 10 times and took the
      average, since the test itself has at most ~15% variation in my runs),
      and sometimes no regression at all.
      
      This might be caused by deferred split queue lock contention. With some
      configurations (e.g. just one root memcg) the lock contention may be
      worse than before (given 2 nodes, two locks are reduced to one lock).
      
      So, move the deferred split queue to the memcg's nodeinfo to make it
      NUMA aware again.
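
      A rough sketch of the resulting data structures (the exact field
      placement is an assumption based on the description above, not the
      literal patch): the queue lives in each memcg's per-node info, so the
      queue lookup keys off both the page's memcg and its node.

          struct mem_cgroup_per_node {
                  /* ... existing per-node memcg fields ... */
                  struct deferred_split deferred_split_queue;
          };

          static struct deferred_split *get_deferred_split_queue(struct page *page)
          {
                  struct mem_cgroup *memcg = compound_head(page)->mem_cgroup;
                  int nid = page_to_nid(page);

                  if (memcg)
                          return &memcg->nodeinfo[nid]->deferred_split_queue;
                  return &NODE_DATA(nid)->deferred_split_queue;
          }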
      
      With this change stress-ng's madvise test sometimes shows an average 4%
      improvement, and I don't see the degradation anymore.
      
      [1]: https://lore.kernel.org/lkml/20190930084604.GC17687@shao2-debian/
      
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
      1d1b4c6c
    • mm: thp: make deferred split shrinker memcg aware · ace35514
      Committed by Yang Shi
      commit 87eaceb3faa59b9b4d940ec9554ce251325d83fe upstream
      
      Currently the THP deferred split shrinker is not memcg aware; this may
      cause premature OOM with some configurations. For example, the test
      below runs into premature OOM easily:
      
      $ cgcreate -g memory:thp
      $ echo 4G > /sys/fs/cgroup/memory/thp/memory.limit_in_bytes
      $ cgexec -g memory:thp transhuge-stress 4000
      
      transhuge-stress comes from the kernel selftests.
      
      It is easy to hit OOM, but there are still a lot of THPs on the
      deferred split queue; memcg direct reclaim can't touch them since the
      deferred split shrinker is not memcg aware.
      
      Make the deferred split shrinker memcg aware by introducing a per-memcg
      deferred split queue. A THP should be on either the per-node or the
      per-memcg deferred split queue if it belongs to a memcg. When the page
      is migrated to another memcg, it is moved to the target memcg's
      deferred split queue too.
      
      Reuse the second tail page's deferred_list for the per-memcg list,
      since the same THP can't be on multiple deferred split queues.
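
      The queueing path and the shrinker registration then look roughly like
      this (simplified from the referenced upstream commit):

          static struct shrinker deferred_split_shrinker = {
                  .count_objects  = deferred_split_count,
                  .scan_objects   = deferred_split_scan,
                  .seeks          = DEFAULT_SEEKS,
                  .flags          = SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE |
                                    SHRINKER_NONSLAB,
          };

          void deferred_split_huge_page(struct page *page)
          {
                  struct deferred_split *ds_queue = get_deferred_split_queue(page);
                  struct mem_cgroup *memcg = compound_head(page)->mem_cgroup;
                  unsigned long flags;

                  spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
                  if (list_empty(page_deferred_list(page))) {
                          list_add_tail(page_deferred_list(page),
                                        &ds_queue->split_queue);
                          ds_queue->split_queue_len++;
                          /* Tell memcg reclaim this shrinker has work to do. */
                          if (memcg)
                                  memcg_set_shrinker_bit(memcg, page_to_nid(page),
                                                         deferred_split_shrinker.id);
                  }
                  spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
          }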
      
      [yang.shi@linux.alibaba.com: simplify deferred split queue dereference per Kirill Tkhai]
        Link: http://lkml.kernel.org/r/1566496227-84952-5-git-send-email-yang.shi@linux.alibaba.com
      Link: http://lkml.kernel.org/r/1565144277-36240-5-git-send-email-yang.shi@linux.alibaba.com
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
      ace35514
    • mm: shrinker: make shrinker not depend on memcg kmem · b382ffa5
      Committed by Yang Shi
      commit 0a432dcbeb32edcd211a5d8f7847d0da7642a8b4 upstream
      
      Currently shrinkers are only allocated and able to work when memcg kmem
      is enabled. But the THP deferred split shrinker is not a slab shrinker,
      so it doesn't make much sense to have such a shrinker depend on memcg
      kmem. It should be able to reclaim THPs even when memcg kmem is
      disabled.
      
      Introduce a new shrinker flag, SHRINKER_NONSLAB, for non-slab shrinkers.
      When memcg kmem is disabled, only such shrinkers are called when
      shrinking memcg slab.
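
      The key check in shrink_slab_memcg() is roughly the following
      (simplified from the referenced upstream commit):

          /* When kmem accounting is disabled, only call shrinkers that
           * flagged themselves as non-slab. */
          if (!memcg_kmem_enabled() &&
              !(shrinker->flags & SHRINKER_NONSLAB))
                  continue;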
      
      [yang.shi@linux.alibaba.com: add comment]
        Link: http://lkml.kernel.org/r/1566496227-84952-4-git-send-email-yang.shi@linux.alibaba.com
      Link: http://lkml.kernel.org/r/1565144277-36240-4-git-send-email-yang.shi@linux.alibaba.com
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
      b382ffa5
    • mm: move mem_cgroup_uncharge out of __page_cache_release() · 79044939
      Committed by Yang Shi
      commit 7ae88534cdd96235cd775c03b32a75009355740b upstream
      
      A later patch makes the THP deferred split shrinker memcg aware, but it
      needs page->mem_cgroup information in the THP destructor, which
      currently runs after mem_cgroup_uncharge().
      
      So move mem_cgroup_uncharge() from __page_cache_release() to the
      compound page destructor, which is called for both THP and other
      compound pages except HugeTLB, and call it in __put_single_page() for
      order-0 pages.
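
      The resulting call sites look roughly like this (simplified sketch):

          static void __put_single_page(struct page *page)
          {
                  __page_cache_release(page);
                  mem_cgroup_uncharge(page);
                  free_unref_page(page);
          }

          void free_compound_page(struct page *page)
          {
                  mem_cgroup_uncharge(page);
                  __free_pages_ok(page, compound_order(page));
          }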
      
      Link: http://lkml.kernel.org/r/1565144277-36240-3-git-send-email-yang.shi@linux.alibaba.com
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Suggested-by: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
      79044939
    • mm: thp: extract split_queue_* into a struct · c9acf2bd
      Committed by Yang Shi
      commit 364c1eebe453f06f0c1e837eb155a5725c9cd272 upstream
      
      Patch series "Make deferred split shrinker memcg aware", v6.
      
      Currently the THP deferred split shrinker is not memcg aware; this may
      cause premature OOM with some configurations. For example, the test
      below runs into premature OOM easily:
      
      $ cgcreate -g memory:thp
      $ echo 4G > /sys/fs/cgroup/memory/thp/memory.limit_in_bytes
      $ cgexec -g memory:thp transhuge-stress 4000
      
      transhuge-stress comes from the kernel selftests.
      
      It is easy to hit OOM, but there are still a lot of THPs on the
      deferred split queue; memcg direct reclaim can't touch them since the
      deferred split shrinker is not memcg aware.
      
      Make the deferred split shrinker memcg aware by introducing a per-memcg
      deferred split queue. A THP should be on either the per-node or the
      per-memcg deferred split queue if it belongs to a memcg. When the page
      is migrated to another memcg, it is moved to the target memcg's
      deferred split queue too.
      
      Reuse the second tail page's deferred_list for the per-memcg list,
      since the same THP can't be on multiple deferred split queues.
      
      Make the deferred split shrinker not depend on memcg kmem, since it is
      not a slab shrinker. It doesn't make sense to skip shrinking THPs just
      because memcg kmem is disabled.
      
      With the above changes, the test shown above no longer triggers OOM,
      even with cgroup.memory=nokmem.
      
      This patch (of 4):
      
      Put split_queue, split_queue_lock and split_queue_len into a struct in
      order to reduce code duplication when we convert deferred_split to be
      memcg aware in the later patches.
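
      The extracted struct and its per-node instance look roughly like this
      (simplified):

          struct deferred_split {
                  spinlock_t split_queue_lock;
                  struct list_head split_queue;
                  unsigned long split_queue_len;
          };

          typedef struct pglist_data {
                  /* ... */
          #ifdef CONFIG_TRANSPARENT_HUGEPAGE
                  struct deferred_split deferred_split_queue;
          #endif
                  /* ... */
          } pg_data_t;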
      
      Link: http://lkml.kernel.org/r/1565144277-36240-2-git-send-email-yang.shi@linux.alibaba.com
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Suggested-by: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
      c9acf2bd
    • alios: mm: Support kidled · fd952d8c
      Committed by Gavin Shan
      This enables scanning pages at a fixed interval to determine their
      access frequency (hot/cold). The result is exported to userland per
      memory cgroup via "memory.idle_page_stats". The design is highlighted
      below:
      
         * A kernel thread is spawned when this feature is enabled by writing
           a non-zero value to "/sys/kernel/mm/kidled/scan_period_in_seconds".
           The thread sequentially scans the nodes and the pages that are
           chained up in their LRU lists.
      
         * For each page, the corresponding age information is stored in the
           page flags or in a per-node array. The age represents the number of
           scan intervals in which the page wasn't accessed. The page flag
           (PG_idle) is also leveraged: the page's age is increased by one if
           the idle flag isn't cleared in two consecutive scans, otherwise the
           age is reset. The age information is also cleared when the page is
           freed, so that stale age information isn't fetched when the page is
           reallocated. (See the sketch after this list.)
      
         * Initially the flag is set while the access bit in the PTE is
           cleared by the thread. In the next scan period, the PTE access bit
           is synchronized with the page flag: the flag is cleared if the
           access bit is set, and kept otherwise. For unmapped pages, the flag
           is cleared when the page is accessed.
      
         * Eventually the page's aging information is accumulated into the
           unstable bucket of its memory cgroup as statistics. The unstable
           bucket (statistics) is copied to the stable bucket once all pages
           in all nodes have been scanned. The stable bucket (statistics) is
           exported to userland through "memory.idle_page_stats".
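
      A hypothetical sketch of the per-page aging step described above
      (kidled is out of tree, so the kidled_* helper names here are
      illustrative rather than the real API; page_is_idle()/set_page_idle()
      are the generic page-idle helpers):

          static void kidled_scan_page(struct page *page)
          {
                  int age = kidled_get_page_age(page);        /* illustrative */

                  if (page_is_idle(page)) {
                          /* Not referenced since the previous scan: age it. */
                          kidled_set_page_age(page, age + 1);  /* illustrative */
                  } else {
                          /* Referenced: reset the age. */
                          kidled_set_page_age(page, 0);        /* illustrative */
                  }
                  /* Re-arm PG_idle; for mapped pages the PTE access bits are
                   * cleared as well so the next scan can observe new
                   * references. */
                  set_page_idle(page);
          }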
      
      TESTING
      =======
      
         * cgroup1, unmapped pagecache
      
           # dd if=/dev/zero of=/ext4/test.data oflag=direct bs=1M count=128
           #
           # echo 1 > /sys/kernel/mm/kidled/use_hierarchy
           # echo 15 > /sys/kernel/mm/kidled/scan_period_in_seconds
           # mkdir -p /cgroup/memory
           # mount -tcgroup -o memory /cgroup/memory
           # echo 1 > /cgroup/memory/memory.use_hierarchy
           # mkdir -p /cgroup/memory/test
           # echo 1 > /cgroup/memory/test/memory.use_hierarchy
           #
           # echo $$ > /cgroup/memory/test/cgroup.procs
           # dd if=/ext4/test.data of=/dev/null bs=1M count=128
           # < wait a few minutes >
           # cat /cgroup/memory/test/memory.idle_page_stats | grep cfei
             cfei   0   0   0   134217728   0   0   0   0
           # cat /cgroup/memory/memory.idle_page_stats | grep cfei
             cfei   0   0   0   134217728   0   0   0   0
      
         * cgroup1, mapped pagecache
      
           # < create same file and memory cgroups as above >
           #
           # echo $$ > /cgroup/memory/test/cgroup.procs
           # < run program to mmap the whole created file and access the area >
           # < wait a few minutes >
           # cat /cgroup/memory/test/memory.idle_page_stats | grep cfei
             cfei   0   134217728   0   0   0   0   0   0
           # cat /cgroup/memory/memory.idle_page_stats | grep cfei
             cfei   0   134217728   0   0   0   0   0   0
      
         * cgroup1, mapped and locked pagecache
      
           # < create same file and memory cgroups as above >
           #
           # echo $$ > /cgroup/memory/test/cgroup.procs
           # < run program to mmap the whole created file and mlock the area >
           # < wait a few minutes >
           # cat /cgroup/memory/test/memory.idle_page_stats | grep cfui
             cfui   0   134217728   0   0   0   0   0   0
           # cat /cgroup/memory/memory.idle_page_stats | grep cfui
             cfui   0   134217728   0   0   0   0   0   0
      
         * cgroup1, anonymous and locked area
      
           # < create memory cgroups as above >
           #
           # echo $$ > /cgroup/memory/test/cgroup.procs
           # < run program to mmap anonymous area and mlock it >
           # < wait a few minutes >
           # cat /cgroup/memory/test/memory.idle_page_stats | grep csui
             csui   0   0   134217728   0   0   0   0   0
           # cat /cgroup/memory/memory.idle_page_stats | grep csui
             csui   0   0   134217728   0   0   0   0   0
      
         * Rerun the above test cases in cgroup2 and the results show no
           exceptions. However, the cgroups are populated in a different way,
           as below:
      
           # mkdir -p /cgroup
           # mount -tcgroup2 none /cgroup
           # echo "+memory" > /cgroup/cgroup.subtree_control
           # mkdir -p /cgroup/test
      Signed-off-by: Gavin Shan <shan.gavin@linux.alibaba.com>
      Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>
      Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
      fd952d8c
    • alios: mm: memcontrol: make distance between wmark_low and wmark_high configurable · 33ef4784
      Committed by Yang Shi
      Introduce a new interface, wmark_scale_factor, which defines the
      distance between wmark_high and wmark_low.  The unit is in fractions of
      10,000. The default value of 50 means the distance between wmark_high
      and wmark_low is 0.5% of the max limit of the cgroup.  The maximum value
      is 1000, or 10% of the max limit.
      
      The distance between wmark_low and wmark_high has an impact on how hard
      memcg kswapd reclaims.
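
      A hypothetical sketch of the gap computation (the helper and field
      names are illustrative, not the actual implementation):

          /* wmark_scale_factor is in units of 1/10000 of the cgroup limit:
           * the default 50 gives a 0.5% gap, the maximum 1000 gives 10%. */
          static unsigned long memcg_wmark_gap(unsigned long limit,
                                               unsigned int wmark_scale_factor)
          {
                  return limit / 10000 * wmark_scale_factor;
          }

          /* wmark_low then sits that far below wmark_high:
           *   wmark_low = wmark_high - memcg_wmark_gap(limit, factor);
           */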
      Reviewed-by: Gavin Shan <shan.gavin@linux.alibaba.com>
      Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      33ef4784
    • alios: mm: vmscan: make memcg kswapd set memcg state to dirty or writeback · e10c247b
      Committed by Yang Shi
      The global kswapd marks a memory node as dirty or under writeback if
      the current scan finds that all pages are unqueued-dirty or under
      writeback; kswapd then writes out dirty pages or waits for writeback to
      complete. The memcg kswapd behaves like the global kswapd, so it should
      set the dirty or writeback state on the memcg too when the same
      condition is met.
      
      Since direct reclaim can't write out page caches, the system depends on
      kswapd to write out dirty pages when a scan finds too many of them, in
      order to avoid premature OOM. But if the page cache is dirtied too
      fast, writing out pages definitely can't catch up with the dirtying; it
      is the responsibility of dirty page balancing to throttle the dirtiers.
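
      A hypothetical sketch of the condition (the memcg flag helpers are
      illustrative; the sc->nr counters mirror what the global kswapd already
      checks when setting PGDAT_DIRTY / PGDAT_WRITEBACK on the node):

          if (current_is_kswapd() && !global_reclaim(sc)) {
                  if (sc->nr.unqueued_dirty == sc->nr.file_taken)
                          memcg_set_dirty(memcg);             /* illustrative */
                  if (sc->nr.writeback && sc->nr.writeback == sc->nr.taken)
                          memcg_set_writeback(memcg);         /* illustrative */
          }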
      Reviewed-by: Gavin Shan <shan.gavin@linux.alibaba.com>
      Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      e10c247b
    • alios: mm: memcontrol: treat memcg wmark reclaim work as kswapd · f7c87fa3
      Committed by Yang Shi
      Since the background watermark reclaim is scheduled from a workqueue,
      it can do more work than direct reclaim, e.g. write out dirty pages.
      
      So add the PF_KSWAPD flag, so that current_is_kswapd() returns true for
      memcg background reclaim. The condition "current_is_kswapd() &&
      !global_reclaim(sc)" is good enough to tell whether current is the
      global kswapd or memcg background reclaim.
      
      And since kswapd is not allowed to break memory.low protection for now,
      memcg kswapd should not break it either.
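
      A hypothetical sketch of the work function (the function and helper
      names are illustrative, not the actual patch):

          static void memcg_wmark_reclaim_func(struct work_struct *work)
          {
                  struct mem_cgroup *memcg = wmark_work_to_memcg(work); /* illustrative */

                  /* Make current_is_kswapd() report true for this reclaim. */
                  current->flags |= PF_KSWAPD;
                  try_to_free_mem_cgroup_pages(memcg, SWAP_CLUSTER_MAX,
                                               GFP_KERNEL, true);
                  current->flags &= ~PF_KSWAPD;
          }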
      Reviewed-by: Gavin Shan <shan.gavin@linux.alibaba.com>
      Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      f7c87fa3
    • alios: mm: memcontrol: add background reclaim support for cgroupv2 · 256b5d94
      Committed by Yang Shi
      Like v1, add background reclaim support for cgroup v2. The interfaces
      are exactly the same as in v1; however, if a high limit is set up for
      v2, the watermarks are calculated from the high limit instead of the
      max limit.
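
      A hypothetical sketch of how the watermark base could be chosen (the
      helper name is illustrative): on the default (v2) hierarchy use
      memory.high when it is configured, otherwise fall back to the max
      limit as on v1.

          static unsigned long memcg_wmark_base(struct mem_cgroup *memcg)
          {
                  unsigned long max = READ_ONCE(memcg->memory.max);
                  unsigned long high = READ_ONCE(memcg->high);

                  if (cgroup_subsys_on_dfl(memory_cgrp_subsys) &&
                      high != PAGE_COUNTER_MAX)
                          return min(high, max);
                  return max;
          }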
      Reviewed-by: Gavin Shan <shan.gavin@linux.alibaba.com>
      Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      256b5d94