Understanding Linux CPU Load - when should you be worried?


 

You might be familiar with Linux load averages already. Load averages are the three numbers shown with the uptime and top commands - they look like this:

load average: 0.09, 0.05, 0.01

Most people have an inkling of what the load averages mean: the three numbers represent averages over progressively longer periods of time (one-, five-, and fifteen-minute averages), and lower numbers are better. Higher numbers represent a problem or an overloaded machine. But what's the threshold? What constitutes "good" and "bad" load average values? When should you be concerned about a load average value, and when should you scramble to fix it ASAP?
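
If you'd rather read these numbers from a script than parse uptime's output, here is a minimal sketch in Python. It assumes a Linux box, since it reads the Linux-specific /proc/loadavg file, which is where uptime and top get the numbers:

# Minimal sketch (Linux only): the first three fields of /proc/loadavg
# are the 1-, 5-, and 15-minute load averages.
with open("/proc/loadavg") as f:
    one, five, fifteen = (float(x) for x in f.read().split()[:3])

print(f"load average: {one:.2f}, {five:.2f}, {fifteen:.2f}")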

First, a little background on what the load average values mean. We'll start out with the simplest case: a machine with one single-core processor.

The traffic analogy

A single-core CPU is like a single lane of traffic. Imagine you are a bridge operator ... sometimes your bridge is so busy there are cars lined up to cross. You want to let folks know how traffic is moving on your bridge. A decent metric would be how many cars are waiting at a particular time. If no cars are waiting, incoming drivers know they can drive across right away. If cars are backed up, drivers know they're in for delays.

So, Bridge Operator, what numbering system are you going to use? How about:

  • 0.00 means there's no traffic on the bridge at all. In fact, between 0.00 and 1.00 means there's no backup, and an arriving car will just go right on.
  • 1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow down.
  • over 1.00 means there's backup. How much? Well, 2.00 means that there are two lanes' worth of cars total -- one lane's worth on the bridge, and one lane's worth waiting. 3.00 means there are three lanes' worth total -- one lane's worth on the bridge, and two lanes' worth waiting. Etc.

[Bridge illustration: load of 1.00]

[Bridge illustration: load of 0.50]

[Bridge illustration: load of 1.70]

This is basically what CPU load is. "Cars" are processes using a slice of CPU time ("crossing the bridge") or queued up to use the CPU. Unix refers to this as the run-queue length: the sum of the number of processes that are currently running plus the number that are waiting (queued) to run.
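
On Linux you can also peek at the instantaneous run queue that these averages smooth out. A small sketch, again assuming the Linux-specific /proc/loadavg, whose fourth field is "runnable/total":

# Sketch (Linux only): the fourth field of /proc/loadavg is
# "currently runnable tasks / total tasks" -- a snapshot of the
# run queue, before any averaging.
with open("/proc/loadavg") as f:
    fields = f.read().split()

runnable, total = fields[3].split("/")
print(f"{runnable} runnable task(s) out of {total} total")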

Like the bridge operator, you'd like your cars/processes to never be waiting. So, your CPU load should ideally stay below 1.00. Also like the bridge operator, you are still ok if you get some temporary spikes above 1.00 ... but when you're consistently above 1.00, you need to worry.

So you're saying the ideal load is 1.00?

Well, not exactly. The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins will draw a line at 0.70:

  • The "Need to Look into it" Rule of Thumb: 0.70. If your load average is staying above 0.70, it's time to investigate before things get worse.

  • The "Fix this now" Rule of Thumb: 1.00. If your load average stays above 1.00, find the problem and fix it now. Otherwise, you're going to get woken up in the middle of the night, and it's not going to be fun.

  • The "Arrgh, it's 3AM WTF?" Rule of Thumb: 5.00. If your load average is above 5.00, you could be in serious trouble: your box is either hanging or slowing way down, and this will (inexplicably) happen at the worst possible time, like in the middle of the night or when you're presenting at a conference. Don't let it get there. (A rough check of these thresholds is sketched just below.)
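
Here's that rough check as a Python sketch. It assumes a single-core machine for now (scale by core count, as discussed in the next section) and uses os.getloadavg(), which returns the same three numbers uptime prints:

import os

# Rough sketch of the rules of thumb above, assuming a single-core box.
one, five, fifteen = os.getloadavg()

if five >= 5.00:
    print(f"Arrgh, it's 3AM WTF? Load is {five:.2f}")
elif five >= 1.00:
    print(f"Fix this now: load is {five:.2f}")
elif five >= 0.70:
    print(f"Need to look into it: load is {five:.2f}")
else:
    print(f"All clear: load is {five:.2f}")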

What about Multi-processors? My load says 3.00, but things are running fine!

Got a quad-processor system? It's still healthy with a load of 3.00.

On a multi-processor system, the load is relative to the number of processor cores available. The "100% utilization" mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.

If we go back to the bridge analogy, the "1.00" really means "one lane's worth of traffic". On a one-lane bridge, that means it's filled up. On a two-lane bridge, a load of 1.00 means it's at 50% capacity -- only one lane is full, so there's another whole lane that can be filled.

[Bridge illustration: load of 2.00 on a two-lane road]

Same with CPUs: a load of 1.00 is 100% CPU utilization on a single-core box. On a dual-core box, a load of 2.00 is 100% CPU utilization.
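
In code, that scaling is just a division by the number of cores. A minimal sketch, using os.cpu_count() to ask how many cores the box has:

import os

# Sketch: on a multi-core box the "100% utilization" mark is the core
# count, so divide the load by it to get a fraction of total capacity.
cores = os.cpu_count() or 1        # e.g. 2 on a dual-core, 4 on a quad-core
one_minute = os.getloadavg()[0]

print(f"load {one_minute:.2f} of a possible {cores}.00 "
      f"({one_minute / cores:.0%} of capacity)")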

Multicore vs. multiprocessor

While we're on the topic, let's talk about multicore vs. multiprocessor. For performance purposes, is a machine with a single dual-core processor basically equivalent to a machine with two processors with one core each? Yes. Roughly. There are lots of subtleties here concerning amount of cache, frequency of process hand-offs between processors, etc. Despite those finer points, for the purposes of sizing up the CPU load value, the total number of cores is what matters, regardless of how many physical processors those cores are spread across.

Which leads us to two new Rules of Thumb:

  • The "number of cores = max load" Rule of Thumb: on a multicore system, your load should not exceed the number of cores available.

  • The "cores is cores" Rule of Thumb: How the cores are spread out over CPUs doesn't matter. Two quad-cores == four dual-cores == eight single-cores. It's all eight cores for these purposes.

Bringing It Home

Let's take a look at the load averages output from uptime:

~ $ uptime
23:05 up 14 days, 6:08, 7 users, load averages: 0.65 0.42 0.36

This is on a dual-core CPU, so we've got lots of headroom. I won't even think about it until load gets and stays above 1.7 or so.

Now, what about those three numbers? 0.65 is the average over the last minute, 0.42 is the average over the last five minutes, and 0.36 is the average over the last 15 minutes. Which brings us to the question:

Which average should I be observing? One, five, or 15 minute?

For the numbers we've talked about (1.00 = fix it now, etc.), you should be looking at the five- or 15-minute averages. Frankly, if your box spikes above 1.0 on the one-minute average, you're still fine. It's when the 15-minute average goes north of 1.0 and stays there that you need to snap to. (Obviously, as we've learned, adjust these numbers to the number of processor cores your system has.)
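
Putting those two points together, a sustained-load check might look at only the 15-minute average, scaled by core count. This is just an illustrative sketch, not how any particular monitoring tool does it:

import os

# Sketch: alert only on the sustained (15-minute) average, per core.
cores = os.cpu_count() or 1
fifteen = os.getloadavg()[2]       # index 2 is the 15-minute average

if fifteen / cores > 1.0:
    print(f"15-minute load {fifteen:.2f} is north of {cores}.00 -- time to act")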

So # of cores is important to interpreting load averages ... how do I know how many cores my system has?

Run cat /proc/cpuinfo to get info on each processor in your system. (Note: it's not available on OS X; Google for alternatives.) To get just a count, run it through grep and word count: grep 'model name' /proc/cpuinfo | wc -l
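
If you'd rather get that count from a script, here's a small Python sketch showing two approaches. The /proc/cpuinfo parse is Linux-only and assumes each logical core has its own "model name" line (true on x86); os.cpu_count() is the portable alternative:

import os

# Portable: ask the standard library how many cores it sees.
print("os.cpu_count():", os.cpu_count())

# Linux only: parse /proc/cpuinfo, same idea as the grep | wc -l one-liner.
with open("/proc/cpuinfo") as f:
    count = sum(1 for line in f if line.startswith("model name"))

print("cores from /proc/cpuinfo:", count)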

Monitoring Linux CPU Load with Scout

Scout provides two ways to monitor the CPU load. Our original Server Load plugin and Jesse Newland's Load-Per-Processor plugin both report the CPU load and alert you when the load peaks and/or is trending in the wrong direction:

[Screenshot: load alert]

 
