I. Concepts
1. OOM killer
As the name suggests, the OOM (out of memory) killer is the mechanism by which Linux, on finding that memory is exhausted, forcibly kills some user processes (never kernel processes) so that the system still has enough physical memory left to allocate.
2. Memory overcommit
Linux answers "yes" to most memory allocation requests so that it can run more and larger programs, because memory that has just been allocated is usually not used right away. This technique is called overcommit.
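One way to see overcommit at work is to compare how much memory the kernel has already promised to processes with how much physical RAM exists; on a heavily overcommitted machine the committed total can exceed RAM. A quick check using the standard /proc/meminfo fields:
# Committed_AS is the total memory already promised to processes;
# under overcommit it may exceed MemTotal (physical RAM)
grep -E 'MemTotal|CommitLimit|Committed_AS' /proc/meminfo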
The kernel parameter vm.overcommit_memory controls the allocation policy and has three possible values:
vm.overcommit_memory | Meaning |
0 | The kernel performs a heuristic check of whether enough usable memory is available; if so, the allocation is allowed, otherwise it fails and an error is returned to the application. This is the default. |
1 | The kernel allows every allocation, regardless of the current memory state. |
2 | The kernel refuses to overcommit: total commitments may not exceed swap space plus overcommit_ratio percent of physical memory; requests beyond that limit fail. |
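For reference, the policy can be read and switched at runtime with sysctl. Under policy 2 the ceiling is governed by a second parameter, vm.overcommit_ratio (default 50), so that CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100. A minimal sketch:
# Current policy and the ratio that only matters under policy 2
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio
# Switch the policy at runtime (does not survive a reboot)
sysctl -w vm.overcommit_memory=2
# The resulting hard limit on total commitments
grep CommitLimit /proc/meminfo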
The OOM killer and memory overcommit interact with each other. Take a Linux server with 16 GB of RAM:
With overcommit_memory=0, an allocation simply fails when the kernel decides there is not enough memory available.
With overcommit_memory=1, every allocation request from applications succeeds, but that guarantee rests on the OOM killer killing off some processes once memory is actually exhausted.
II. Experiment
1. Environment
(1) Memory: swap is disabled; available memory is 12397 MB
             total       used       free     shared    buffers     cached
Mem:         15948       3606      12342          0         10         45
-/+ buffers/cache:        3550      12397
Swap:            0          0          0
(2) overcommit_memory is currently 0
cat /proc/sys/vm/overcommit_memory
0
2. Start an application that forcibly consumes memory
I am somewhat familiar with Redis, and redis-server has an option called --test-memory that consumes system memory, with the size given in MB:
redis-server --test-memory 1024
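While running the steps below, it helps to watch memory usage and the kernel log side by side in two extra terminals; a minimal sketch:
# Terminal 1: refresh the memory picture every second
watch -n 1 free -m
# Terminal 2: follow kernel messages and highlight OOM killer activity
tail -f /var/log/messages | grep -i -e oom -e "out of memory"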
(1) Start one redis-server --test-memory 5120 to consume 5 GB; memory becomes:
[@zw_53_159 fl]# free -m
             total       used       free     shared    buffers     cached
Mem:         15948       8704       7243          0         10         47
-/+ buffers/cache:        8646       7301
Swap:            0          0          0
(2) Start a second redis-server --test-memory 5120 to consume another 5 GB; memory becomes:
[@zw_53_159 fl]# free -m
             total       used       free     shared    buffers     cached
Mem:         15948      13839       2109          0         10         47
-/+ buffers/cache:       13781       2166
Swap:            0          0          0
(3) At this point 2166 MB is still available. What happens if we start a 3 GB Redis test?
[@zw_53_159 ~]# redis-server --test-memory 3072
Unable to allocate 3072 megabytes: Cannot allocate memory
Not enough memory? Why:
- No swap is available.
- overcommit_memory=0, so the overcommit mechanism is not used.
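For context, with overcommit_memory=0 the kernel applies a heuristic that, roughly speaking, weighs the request against free pages plus reclaimable page cache plus free swap. A quick look at what that heuristic had to work with here (standard /proc/meminfo fields):
# Free memory, reclaimable page cache and free swap: with swap at 0 and
# only ~2 GB free, a 3072 MB request fails the heuristic check
grep -E 'MemFree|Buffers|^Cached|SwapFree' /proc/meminfo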
(4) Change overcommit_memory to 1:
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf sysctl vm.overcommit_memory=1
(a) The system currently has only a few Redis processes running:
root     26738 21729 99 15:03 pts/5    00:06:10 redis-server --test-memory 5120
root     26910 22005 99 15:04 pts/7    00:05:34 redis-server --test-memory 5120
(b) Watch the OOM killer log:
tail -f /var/log/messages
Now start the 3 GB Redis test again: it starts successfully, but the OOM killer kills one of the earlier redis-server processes. This can be seen from two angles:
(a) Processes: process 26738 is gone.
root     26910 22005 99 15:04 pts/7    00:08:46 redis-server --test-memory 5120
root     27805 21964 93 15:12 pts/6    00:00:27 redis-server --test-memory 3072
(b) The OOM killer log: it contains a great deal of detail, for example how the kernel computes a badness score for each user process; this score (adjusted by oom_score_adj) determines how likely that process is to be killed (see the sketch after the log below). The final verdict is:
Out of memory: Kill process 26738 (redis-server) score 291 or sacrifice child
Killed process 26738, UID 0, (redis-server) total-vm:5359844kB, anon-rss:5247132kB, file-rss:16kB
The full kernel log entry (abridged here) reads:
Feb 17 15:12:53 zw_53_159 kernel: redis-server invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Feb 17 15:12:53 zw_53_159 kernel: redis-server cpuset=/ mems_allowed=0
Feb 17 15:12:53 zw_53_159 kernel: Pid: 32518, comm: redis-server Not tainted 2.6.32-279.el6.x86_64 #1
Feb 17 15:12:53 zw_53_159 kernel: Call Trace:
... (kernel call trace omitted) ...
Feb 17 15:12:53 zw_53_159 kernel: Mem-Info:
... (per-CPU page statistics omitted) ...
Feb 17 15:12:53 zw_53_159 kernel: active_anon:3223340 inactive_anon:756981 isolated_anon:9
Feb 17 15:12:53 zw_53_159 kernel: active_file:234 inactive_file:91 isolated_file:0
Feb 17 15:12:53 zw_53_159 kernel: unevictable:0 dirty:0 writeback:0 unstable:0
Feb 17 15:12:53 zw_53_159 kernel: free:33241 slab_reclaimable:2879 slab_unreclaimable:8604
Feb 17 15:12:53 zw_53_159 kernel: mapped:139 shmem:55 pagetables:10616 bounce:0
... (per-zone memory details omitted) ...
Feb 17 15:12:53 zw_53_159 kernel: Free swap  = 0kB
Feb 17 15:12:53 zw_53_159 kernel: Total swap = 0kB
Feb 17 15:12:53 zw_53_159 kernel: 4194303 pages RAM
Feb 17 15:12:53 zw_53_159 kernel: [ pid ]   uid  tgid total_vm      rss cpu oom_adj oom_score_adj name
... (most of the process table omitted; the relevant redis-server entries are) ...
Feb 17 15:12:53 zw_53_159 kernel: [26738]     0 26738  1339961  1311787   1       0             0 redis-server
Feb 17 15:12:53 zw_53_159 kernel: [26910]     0 26910  1339961  1311787  14       0             0 redis-server
Feb 17 15:12:53 zw_53_159 kernel: [27805]     0 27805   815673   522185  13       0             0 redis-server
Feb 17 15:12:53 zw_53_159 kernel: Out of memory: Kill process 26738 (redis-server) score 291 or sacrifice child
Feb 17 15:12:53 zw_53_159 kernel: Killed process 26738, UID 0, (redis-server) total-vm:5359844kB, anon-rss:5247132kB, file-rss:16kB
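The score and per-process adjustment shown in the table above can also be inspected and tuned through /proc on kernels that expose oom_score_adj (this RHEL 6 kernel does, as the log shows). A minimal sketch, using the surviving PID 26910 from this run purely as an example:
# Badness score the OOM killer would currently assign to this process
cat /proc/26910/oom_score
# Adjustment in the range -1000..1000; higher means more likely to be killed
cat /proc/26910/oom_score_adj
# Exempt a critical process (e.g. a production redis-server) from OOM kills
echo -1000 > /proc/26910/oom_score_adj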
III. Conclusions
Although the overcommit mechanism can guarantee that every memory allocation succeeds, it also means a user process may later be killed by the OOM killer, so enabling it carelessly can cause serious problems of its own.
Whether to use 0 or a non-zero value ultimately depends on the scenario; note that swap was not enabled in this experiment.
Swap was turned off on the test server with swapoff -a. I have not tested what effect doing this has in production, so use it with care.
IV. Follow-up tests
1. After adding swap, memory looks like this:
[@zw_53_159 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         15948        361      15587          0          3         13
-/+ buffers/cache:         344      15604
Swap:        16383          0      16383
2. Repeating the tests from section II shows that the OOM killer only acts once total available memory (physical memory + swap) is exhausted; it is not the case, as some online articles claim, that it runs as soon as physical memory alone is insufficient.
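The write-up does not say how swap was added for this follow-up. One common way, sketched below, is a swap file; the 16 GB size matches the free output above, and /swapfile is just an illustrative path:
# Create a 16 GB swap file (illustrative path), restrict its permissions,
# format it and enable it
dd if=/dev/zero of=/swapfile bs=1M count=16384
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Verify: the Swap row should now show 16383 MB total
free -m
# To return to the no-swap setup used in section II: swapoff -a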
V. References
For enabling and disabling swap, see: Linux服务端swap配置和监控 (Linux server swap configuration and monitoring)