Linux TCP Tuning


reference: http://fasterdata.es.net/TCP-tuning/linux.html

 


There are a lot of differences between Linux version 2.4 and 2.6, so first we'll cover the tuning issues that are the same in both 2.4 and 2.6. To change TCP settings, add the entries below to the file /etc/sysctl.conf, and then run "sysctl -p".

Like all operating systems, the default maximum Linux TCP buffer sizes are way too small. I suggest changing them to the following settings:

  # increase TCP max buffer size settable using setsockopt()
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  # increase Linux autotuning TCP buffer limits
  # min, default, and max number of bytes to use
  # set max to at least 4MB, or higher if you use very high BDP paths
  net.ipv4.tcp_rmem = 4096 87380 16777216 
  net.ipv4.tcp_wmem = 4096 65536 16777216
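
As a rough sanity check (this calculation is illustrative, not from the original text), the maximum buffer should be at least the bandwidth-delay product (BDP) of your path. For a hypothetical 1 Gbit/s path with 100 ms RTT:

   # BDP = bandwidth (bytes/sec) * RTT (sec)
   # (1,000,000,000 / 8) * 0.1 = 12,500,000 bytes, i.e. about 12 MB
   echo $(( 1000000000 / 8 / 10 ))   # prints 12500000

so the 16 MB (16777216 byte) maximum above comfortably covers such a path.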

You should also verify that the following are all set to the default value of 1:

  sysctl net.ipv4.tcp_window_scaling 
  sysctl net.ipv4.tcp_timestamps 
  sysctl net.ipv4.tcp_sack
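
If any of these report 0, you can turn the feature back on with sysctl -w; for example, to re-enable window scaling:

   sysctl -w net.ipv4.tcp_window_scaling=1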

Note: you should leave tcp_mem alone. The defaults are fine.

Another thing you can try that may help increase TCP throughput is to increase the size of the interface queue. To do this, do the following:

     ifconfig eth0 txqueuelen 1000

I've seen increases in bandwidth of up to 8x by doing this on some long, fast paths. This is only a good idea for Gigabit Ethernet connected hosts, and may have other side effects such as uneven sharing between multiple streams.
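
On newer systems where ifconfig has been replaced by the iproute2 tools, the equivalent command is:

   ip link set dev eth0 txqueuelen 1000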

Also, I've been told that for some network paths, using the Linux 'tc' (traffic control) system to pace traffic out of the host can help improve total throughput.
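
As a minimal sketch (the rate and buffer values here are illustrative assumptions, not recommendations), a token-bucket filter that paces eth0 to 900 Mbit/s would look like:

   tc qdisc add dev eth0 root tbf rate 900mbit burst 128kb latency 50ms

Pacing slightly below the bottleneck rate can smooth out bursts that would otherwise overflow queues along the path.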


Linux 2.4

Starting with version 2.4, Linux has implemented a sender-side autotuning mechanism, so setting the optimal buffer size on the sender is not needed. This assumes you have set large buffers on the receive side, as the sending buffer will not grow beyond the size of the receive buffer.

However, Linux 2.4 has some other strange behavior that one needs to be aware of. For example: the value of ssthresh for a given path is cached in the routing table. This means that if a connection has a retransmission and reduces its window, then all connections to that host for the next 10 minutes will use a reduced window size and not even try to increase their windows. The only way to disable this behavior is to do the following before all new connections (you must be root):

       sysctl -w net.ipv4.route.flush=1

More information on various tuning parameters for Linux 2.4 is available in the Ipsysctl tutorial.


Linux 2.6

Starting in Linux 2.6.7 (and back-ported to 2.4.27), Linux includes alternative congestion control algorithms besides the traditional 'reno' algorithm. These are designed to recover quickly from packet loss on high-speed WANs.

Linux 2.6 also includes both sender- and receiver-side automatic buffer tuning (up to the maximum sizes specified above). There is also a setting to fix the ssthresh caching weirdness described above.

There are a couple additional sysctl settings for 2.6:

   # don't cache ssthresh from previous connection
   net.ipv4.tcp_no_metrics_save = 1
   net.ipv4.tcp_moderate_rcvbuf = 1
   # recommended to increase this for 1000 BT or higher
   net.core.netdev_max_backlog = 2500
   # for 10 GigE, use this
   # net.core.netdev_max_backlog = 30000   
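
To confirm the new values took effect, you can query them back in one call:

   sysctl net.ipv4.tcp_no_metrics_save net.ipv4.tcp_moderate_rcvbuf net.core.netdev_max_backlog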

Starting with version 2.6.13, Linux supports pluggable congestion control algorithms. The congestion control algorithm used is set using the sysctl variable net.ipv4.tcp_congestion_control, which is set to cubic or reno by default, depending on which version of the 2.6 kernel you are using.

To get a list of congestion control algorithms that are available in your kernel, run:

   sysctl net.ipv4.tcp_available_congestion_control
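
The output is a space-separated list; on a stock kernel it might look like the following (the exact list depends on how your kernel was built):

   net.ipv4.tcp_available_congestion_control = cubic reno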

The set of congestion control options is selected when you build the kernel. The following are some of the options available in the 2.6.23 kernel:

  • reno: Traditional TCP used by almost all other OSes. (default)
  • cubic: CUBIC-TCP (NOTE: there is a cubic bug in the Linux 2.6.18 kernel. Use 2.6.19 or higher!)
  • bic: BIC-TCP
  • htcp: Hamilton TCP
  • vegas: TCP Vegas
  • westwood: optimized for lossy networks

For very long fast paths, I suggest trying cubic or htcp if reno is not performing as desired. To set this, do the following:

   sysctl -w net.ipv4.tcp_congestion_control=htcp
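
To make the choice persist across reboots, add the corresponding line to /etc/sysctl.conf:

   net.ipv4.tcp_congestion_control = htcp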

More information on each of these algorithms and some results can be found here.

More information on tuning parameters and defaults for Linux 2.6 is available in the file ip-sysctl.txt, which is part of the 2.6 source distribution.

Warning on Large MTUs: If you have configured your Linux host to use 9K MTUs but the connection is using 1500-byte packets, then you actually need 9/1.5 = 6 times more buffer space in order to fill the pipe. In fact, some device drivers only allocate memory in power-of-two sizes, so you may even need 16/1.5 ≈ 11 times more buffer space!
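
As a purely hypothetical illustration of that 6x factor, scaling the 16 MB maximum from above gives roughly 100 MB (16777216 * 6 = 100663296):

   # hypothetical values for the 9K-MTU / 1500-byte-packet case described above
   net.core.rmem_max = 100663296
   net.ipv4.tcp_rmem = 4096 87380 100663296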

And finally a warning for both 2.4 and 2.6: for very large BDP paths where the TCP window is > 20 MB, you are likely to hit the Linux SACK implementation problem. If Linux has too many packets in flight when it gets a SACK event, it takes too long to locate the SACKed packet, and you get a TCP timeout and CWND goes back to 1 packet. Restricting the TCP buffer size to about 12 MB seems to avoid this problem, but clearly limits your total throughput. Another solution is to disable SACK, as shown below.
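
To disable SACK (note that this gives up SACK's loss-recovery benefits, so test it on your own paths):

   sysctl -w net.ipv4.tcp_sack=0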


Linux 2.2

If you are still running Linux 2.2, upgrade! If this is not possible, add the following to /etc/rc.d/rc.local:

   echo 8388608 > /proc/sys/net/core/wmem_max  
   echo 8388608 > /proc/sys/net/core/rmem_max
   echo 65536 > /proc/sys/net/core/rmem_default
   echo 65536 > /proc/sys/net/core/wmem_default
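
You can verify the settings were applied by reading the values back, e.g.:

   cat /proc/sys/net/core/rmem_max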