On Linux, the memory that the ps command reports for each process is the sum of everything mapped into that process; in other words, it is what the process would use if it were the only process running on the system.
The reason the ps numbers are "wrong" is the way Linux handles shared libraries: the system loads only one copy of each shared library's code, and every process that uses the library sees that single copy in its own address space.
For example, ps may report that each Apache process uses 10 MB of memory, while the marginal cost of each process is really about 1 MB, with the remaining 9 MB shared among all the Apache processes.
Understanding memory usage on Linux
This entry is for those people who have ever wondered, "Why the hell is a simple KDE text editor taking up 25 megabytes of memory?" Many people are led to believe that many Linux applications, especially KDE or Gnome programs, are "bloated" based solely upon what tools like ps report. While this may or may not be true, depending on the program, it is not generally true -- many programs are much more memory efficient than they seem.
What ps reports
The ps tool can output various pieces of information about a process, such as its process id, current running state, and resource utilization. Two of the possible outputs are VSZ and RSS, which stand for "virtual memory size" and "resident set size", and which are commonly used by geeks around the world to see how much memory processes are taking up.
For example, here is the output of ps aux for KEdit on my computer:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
dbunker 3468 0.0 2.7 25400 14452 ? S 20:19 0:00 kdeinit: kedit
According to ps, KEdit has a virtual size of about 25 megabytes and a resident size of about 14 megabytes (both numbers above are reported in kilobytes). It seems that most people like to randomly choose to accept one number or the other as representing the real memory usage of a process. I'm not going to explain the difference between VSZ and RSS right now but, needless to say, this is the wrong approach; neither number is an accurate picture of what the memory cost of running KEdit is.
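As a quick aside, you can see where ps gets these two numbers: they come from /proc. Here is a minimal Python sketch, assuming a Linux /proc filesystem, that converts the page counts in /proc/&lt;pid&gt;/statm into the kilobyte values ps labels VSZ and RSS; the PID is just the one from the example above.

import os

def vsz_rss_kb(pid):
    # /proc/<pid>/statm reports sizes in pages; the first two fields are
    # total program size (what ps calls VSZ) and resident set size (RSS).
    page_kb = os.sysconf("SC_PAGE_SIZE") // 1024
    with open(f"/proc/{pid}/statm") as f:
        size, resident = map(int, f.read().split()[:2])
    return size * page_kb, resident * page_kb

# PID taken from the ps output above; substitute any running process.
print(vsz_rss_kb(3468))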
Why ps is "wrong"
Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely "wrong". In order to understand why, it is necessary to learn how Linux handles shared libraries in programs.
Most major programs on Linux use shared libraries to facilitate certain functionality. For example, a KDE text editing program will use several KDE shared libraries (to allow for interaction with other KDE components), several X libraries (to allow it to display images and copy and pasting), and several general system libraries (to allow it to perform basic operations). Many of these shared libraries, especially commonly used ones like libc, are used by many of the programs running on a Linux system. Due to this sharing, Linux is able to use a great trick: it will load a single copy of the shared libraries into memory and use that one copy for every program that references it.
For better or worse, many tools don't care very much about this very common trick; they simply report how much memory a process uses, regardless of whether that memory is shared with other processes as well. Two programs could therefore use a large shared library and yet have its size count towards both of their memory usage totals; the library is being double-counted, which can be very misleading if you don't know what is going on.
Unfortunately, a perfect representation of process memory usage isn't easy to obtain. Not only do you need to understand how the system really works, but you need to decide how you want to deal with some hard questions. Should a shared library that is only needed for one process be counted in that process's memory usage? If a shared library is used my multiple processes, should its memory usage be evenly distributed among the different processes, or just ignored? There isn't a hard and fast rule here; you might have different answers depending on the situation you're facing. It's easy to see why ps doesn't try harder to report "correct" memory usage totals, given the ambiguity.
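For what it's worth, newer kernels (2.6.25 and later) expose a middle-ground answer of their own: the Pss ("proportional set size") field in /proc/&lt;pid&gt;/smaps charges each shared page to a process in proportion to how many processes map it. Here is a rough Python sketch that sums it, assuming a kernel new enough to provide the field:

def pss_kb(pid):
    # Pss splits every shared page evenly among the processes mapping it,
    # so summing Pss over all processes roughly adds up to the RAM in use.
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Pss:"):
                total += int(line.split()[1])  # value is already in kB
    return total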
Seeing a process's memory map
Enough talk; let's see what the situation is with that "huge" KEdit process. To see what KEdit's memory looks like, we'll use the pmap program (with the -d flag):
Address Kbytes Mode Offset Device Mapping
08048000 40 r-x-- 0000000000000000 0fe:00000 kdeinit
08052000 4 rw--- 0000000000009000 0fe:00000 kdeinit
08053000 1164 rw--- 0000000008053000 000:00000 [ anon ]
40000000 84 r-x-- 0000000000000000 0fe:00000 ld-2.3.5.so
40015000 8 rw--- 0000000000014000 0fe:00000 ld-2.3.5.so
40017000 4 rw--- 0000000040017000 000:00000 [ anon ]
40018000 4 r-x-- 0000000000000000 0fe:00000 kedit.so
40019000 4 rw--- 0000000000000000 0fe:00000 kedit.so
40027000 252 r-x-- 0000000000000000 0fe:00000 libkparts.so.2.1.0
40066000 20 rw--- 000000000003e000 0fe:00000 libkparts.so.2.1.0
4006b000 3108 r-x-- 0000000000000000 0fe:00000 libkio.so.4.2.0
40374000 116 rw--- 0000000000309000 0fe:00000 libkio.so.4.2.0
40391000 8 rw--- 0000000040391000 000:00000 [ anon ]
40393000 2644 r-x-- 0000000000000000 0fe:00000 libkdeui.so.4.2.0
40628000 164 rw--- 0000000000295000 0fe:00000 libkdeui.so.4.2.0
40651000 4 rw--- 0000000040651000 000:00000 [ anon ]
40652000 100 r-x-- 0000000000000000 0fe:00000 libkdesu.so.4.2.0
4066b000 4 rw--- 0000000000019000 0fe:00000 libkdesu.so.4.2.0
4066c000 68 r-x-- 0000000000000000 0fe:00000 libkwalletclient.so.1.0.0
4067d000 4 rw--- 0000000000011000 0fe:00000 libkwalletclient.so.1.0.0
4067e000 4 rw--- 000000004067e000 000:00000 [ anon ]
4067f000 2148 r-x-- 0000000000000000 0fe:00000 libkdecore.so.4.2.0
40898000 64 rw--- 0000000000219000 0fe:00000 libkdecore.so.4.2.0
408a8000 8 rw--- 00000000408a8000 000:00000 [ anon ]
... (trimmed) ...
mapped: 25404K writeable/private: 2432K shared: 0K
I cut out a lot of the output; the rest is similar to what is shown. Even without the complete output, we can see some very interesting things. One important thing to note about the output is that each shared library is listed twice; once for its code segment and once for its data segment. The code segments have a mode of "r-x--", while the data is set to "rw---". The Kbytes, Mode, and Mapping columns are the only ones we will care about, as the rest are unimportant to the discussion.
If you go through the output, you will find that the lines with the largest Kbytes number are usually the code segments of the included shared libraries (the ones that start with "lib" are the shared libraries). What is great about that is that they are the ones that can be shared between processes. If you factor out all of the parts that are shared between processes, you end up with the "writeable/private" total, which is shown at the bottom of the output. This is what can be considered the incremental cost of this process, factoring out the shared libraries. Therefore, the cost to run this instance of KEdit (assuming that all of the shared libraries were already loaded) is around 2 megabytes. That is quite a different story from the 14 or 25 megabytes that ps reported.
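If you want to approximate that writeable/private number yourself, one rough Python sketch is to walk /proc/&lt;pid&gt;/maps and add up every mapping that is both writable and private (copy-on-write rather than shared). Note that this counts mapped size rather than resident pages, so it is only an approximation of what pmap prints:

def writeable_private_kb(pid):
    # Sum the sizes of mappings that are writable and private (perms like
    # "rw-p"); shared mappings end in "s" and are skipped.  Sizes come from
    # the address range, so this is mapped size, not resident memory.
    total = 0
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            addrs, perms = line.split()[:2]
            if "w" in perms and perms.endswith("p"):
                start, end = (int(x, 16) for x in addrs.split("-"))
                total += (end - start) // 1024
    return total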
What does it all mean?
The moral of this story is that process memory usage on Linux is a complex matter; you can't just run ps and know what is going on. This is especially true when you deal with programs that create a lot of identical child processes, like Apache. ps might report that each Apache process uses 10 megabytes of memory, when the reality might be that the marginal cost of each Apache process is 1 megabyte of memory. This information becomes critical when tuning Apache's MaxClients setting, which determines how many simultaneous requests your server can handle (although see one of my past postings for another way of increasing Apache's performance).
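To put a rough number on that marginal cost, a Python sketch like the one below sums the private (non-shared) memory of each worker from /proc/&lt;pid&gt;/smaps and averages it. The process name "apache2" is an assumption and may be "httpd" on your distribution, and reading /proc/&lt;pid&gt;/comm requires a reasonably recent kernel:

import os

def private_kb(pid):
    # Private_Clean + Private_Dirty: memory that would actually be freed
    # if this one process went away.
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith(("Private_Clean:", "Private_Dirty:")):
                total += int(line.split()[1])  # kB
    return total

workers = []
for entry in os.listdir("/proc"):
    if not entry.isdigit():
        continue
    try:
        with open(f"/proc/{entry}/comm") as f:
            name = f.read().strip()
        if name == "apache2":        # assumption; may be "httpd" instead
            workers.append(private_kb(int(entry)))
    except OSError:
        pass                         # process exited or permission denied
if workers:
    print(len(workers), "workers, avg private kB:", sum(workers) // len(workers))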
It also shows that it pays to stick with one desktop's software as much as possible. If you run KDE for your desktop, but mostly use Gnome applications, then you are paying a large price for a lot of redundant (but different) shared libraries. By sticking to just KDE or just Gnome apps as much as possible, you reduce your overall memory usage due to the reduced marginal memory cost of running new KDE or Gnome applications, which allows Linux to use more memory for other interesting things (like the file cache, which speeds up file accesses immensely).