The previous post covered installing and setting up Redis; this one walks through the Redis configuration file (redis.conf) section by section.
# Redis configuration file example.
#
# To read a configuration file, Redis must be started with the file path as
# its first argument:
#
# ./redis-server /path/to/redis.conf

# Note on units: when a memory size is needed it can be written in the usual
# form of 1k, 5GB, 4M and so forth:
#
# 1k  => 1000 bytes
# 1kb => 1024 bytes
# 1m  => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g  => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# Units are case insensitive, so 1GB, 1Gb and 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here, e.g. a common template for all
# servers plus a few per-server overrides. Redis always uses the last
# processed line as the value of a directive, so a later file overrides an
# earlier one. Note that "include" lines are not rewritten by CONFIG REWRITE
# (from an admin or from Redis Sentinel), so put includes at the top of this
# file unless you deliberately want them to override runtime changes.
#
# include /path/to/local.conf
# include /path/to/other.conf

################################ GENERAL #####################################

# Run mode: "no" keeps Redis in the foreground, "yes" runs it as a background
# daemon. When daemonized, Redis writes a pid file to /var/run/redis.pid.
daemonize no

# Location of the pid file when running as a daemon.
pidfile /var/run/redis.pid

# Listening port, default 6379. If port 0 is specified, Redis does not listen
# on a TCP socket.
port 6379

# TCP listen() backlog. In high requests-per-second environments a large
# backlog avoids slow-client connection issues. The Linux kernel silently
# truncates it to /proc/sys/net/core/somaxconn, so raise both somaxconn and
# tcp_max_syn_backlog to get the desired effect.
tcp-backlog 511

# By default Redis listens on all network interfaces. Use "bind" to restrict
# it to one or more IP addresses:
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1

# Unix socket to listen on for incoming connections. There is no default;
# when not specified, Redis does not listen on a unix socket.
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close a connection after a client has been idle for N seconds (0 disables
# the check, i.e. idle clients are never dropped).
timeout 0
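A quick way to confirm the file is actually being read is to start the server with it and query a directive back at runtime. A minimal sketch, assuming redis-server and redis-cli are on the PATH and the file lives at /etc/redis/redis.conf (the path is illustrative):

$ redis-server /etc/redis/redis.conf      # start with an explicit config file
$ redis-cli CONFIG GET timeout            # read a directive back at runtime
1) "timeout"
2) "0"
$ redis-cli CONFIG GET maxmemory          # sizes are always reported in bytes
1) "maxmemory"
2) "0"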
# TCP keepalive. If non-zero, SO_KEEPALIVE is used to send TCP ACKs to clients
# in the absence of communication. This detects dead peers and keeps the
# connection alive from the point of view of intermediate network equipment.
# On Linux the value is the period, in seconds, used to send ACKs (closing the
# connection takes roughly twice that time); on other kernels the period
# depends on kernel settings. A reasonable value is 60 seconds; 0 disables it.
tcp-keepalive 0

# Server log verbosity: debug (very chatty, useful for development/testing),
# verbose (many rarely useful messages, but not as noisy as debug), notice
# (moderately verbose, what you probably want in production), warning (only
# very important / critical messages).
loglevel notice

# Log file name. An empty string forces logging to standard output; note that
# when the server is daemonized and logs to stdout, output goes to /dev/null.
logfile /home/redis/logs/log

# Logging to the system logger can be enabled with syslog-enabled, together
# with the usual identity and facility options.
# syslog-enabled no
# syslog-ident redis
# syslog-facility local0

# Number of databases. The default database is DB 0; a connection can switch
# with SELECT <dbid>, where dbid is between 0 and databases-1.
databases 16

################################ SNAPSHOTTING ################################

# Save the DB to disk with: save <seconds> <changes>
# A snapshot is taken when both the elapsed time and the number of write
# operations are reached. Several save points may be configured; the defaults
# below mean: after 900 s (15 min) if at least 1 key changed, after 300 s
# (5 min) if at least 10 keys changed, after 60 s if at least 10000 keys
# changed. Comment out all "save" lines, or add 'save ""', to disable RDB
# snapshots entirely.
save 900 1
save 300 10
save 60 10000

# If RDB snapshots are enabled (at least one save point) and the latest
# background save failed, Redis stops accepting writes so that users notice
# that data is not persisting properly. Once the background saving process
# works again, writes are automatically accepted. If you monitor persistence
# yourself, set this to "no" so Redis keeps serving writes despite disk or
# permission problems.
stop-writes-on-bgsave-error yes

# Compress string objects with LZF when dumping the .rdb file. "yes" is almost
# always a win; "no" saves some CPU in the saving child at the cost of a
# larger file when keys or values are compressible.
rdbcompression yes
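The save points can also be inspected and changed on a running instance, and a snapshot can be forced by hand. A sketch (the timestamp shown is illustrative):

$ redis-cli CONFIG GET save
1) "save"
2) "900 1 300 10 60 10000"
$ redis-cli CONFIG SET save "300 10 60 10000"   # tighten the save points at runtime
OK
$ redis-cli BGSAVE                              # force a background RDB snapshot
Background saving started
$ redis-cli LASTSAVE                            # unix time of the last successful save
(integer) 1482720000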
# Since RDB version 5 a CRC64 checksum is placed at the end of the file. This
# makes the format more resistant to corruption but costs roughly 10% when
# saving and loading, so it can be disabled for maximum performance. Files
# created with checksumming disabled carry a checksum of zero, which tells the
# loading code to skip the check.
rdbchecksum yes

# File name of the RDB dump.
dbfilename dump.rdb

# Working (data) directory. The RDB file and the append-only file are created
# inside this directory, using the file names configured above. It must be a
# directory, not a file name.
dir ./

################################# REPLICATION #################################

# Master-slave replication: use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand about Redis replication:
#
# 1) Replication is asynchronous, but a master can be configured to stop
#    accepting writes when it is not connected to at least a given number of
#    slaves (see min-slaves-to-write below).
# 2) Slaves can perform a partial resynchronization with the master after a
#    relatively short link loss; size the replication backlog (see below)
#    according to your needs.
# 3) Replication is automatic and needs no user intervention: after a network
#    partition, slaves reconnect to the master and resynchronize.

# Master IP address and port.
# slaveof <masterip> <masterport>

# If the master is password protected (see the requirepass directive below),
# the slave must authenticate before replication starts, otherwise the master
# refuses the request.
# masterauth <master-password>

# Behaviour of a slave that has lost its connection to the master, or whose
# replication is still in progress:
# - yes (default): the slave keeps answering client requests, possibly with
#   stale data, or an empty data set if this is the first synchronization;
# - no: the slave replies "SYNC with master in progress" to every command
#   except INFO and SLAVEOF.
slave-serve-stale-data yes

# Whether a slave accepts writes. Since Redis 2.6 slaves are read-only by
# default. Writing to a slave can be useful for ephemeral data (such writes
# are discarded on the next resync with the master), but a misconfigured
# client can easily cause problems. Note that a read-only slave still exposes
# administrative commands such as CONFIG and DEBUG; to a limited extent,
# rename-command can be used to hide them from untrusted clients.
slave-read-only yes
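A minimal, hypothetical two-node setup (master on port 6379, slave on 6380, both on one host) can be wired up and inspected at runtime like this:

$ redis-cli -p 6380 SLAVEOF 127.0.0.1 6379   # turn the 6380 instance into a slave
OK
$ redis-cli -p 6380 INFO replication         # role, link status, lag
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
...
$ redis-cli -p 6380 SLAVEOF NO ONE           # promote it back to a standalone master
OK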
# Replication SYNC strategy: disk or socket.
#
# WARNING: diskless replication is currently experimental.
#
# New slaves, and reconnecting slaves that cannot continue replication by
# receiving just the differences, need a "full synchronization": an RDB file
# is transmitted from the master to the slaves in one of two ways:
#
# 1) Disk-backed: the master forks a child that writes the RDB file to disk,
#    then the parent streams the file to the slaves incrementally.
# 2) Diskless: the master forks a child that writes the RDB directly to the
#    slave sockets, without touching the disk at all.
#
# With disk-backed replication, more slaves can be queued and served with the
# RDB file as soon as the child producing it finishes. With diskless
# replication, slaves arriving after a transfer has started are queued for the
# next one, so the master waits a configurable amount of time hoping that
# several slaves arrive and the transfer can be parallelized. Diskless
# replication works better with slow disks and fast (large bandwidth)
# networks. "no" uses the disk-backed child process, "yes" streams the RDB
# file over the slave sockets.
repl-diskless-sync no

# Delay, in seconds, before a diskless transfer is started, to let more slaves
# arrive and be served by the same child. The default is 5 seconds; 0 disables
# the delay and starts the transfer as soon as possible.
repl-diskless-sync-delay 5

# Interval at which slaves send PINGs to the master (the replication
# heartbeat). The default is 10 seconds.
# repl-ping-slave-period 10

# Replication timeout, covering: bulk transfer I/O during SYNC as seen by the
# slave, master timeout as seen by slaves (data, pings), and slave timeout as
# seen by the master (REPLCONF ACK pings). Keep it larger than
# repl-ping-slave-period, otherwise a timeout is detected whenever traffic
# between master and slave is low.
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
# "yes" makes Redis use fewer TCP packets and less bandwidth to send data to
# slaves, but can delay data on the slave side by up to 40 ms with default
# Linux kernel settings. "no" reduces that delay at the cost of more
# bandwidth. By default Redis optimizes for low latency; with very high
# traffic, or when master and slaves are many hops apart, "yes" may be the
# better choice.
repl-disable-tcp-nodelay no

# Replication backlog size. The backlog is a buffer that accumulates slave
# data while slaves are disconnected, so that a reconnecting slave usually
# needs only a partial resync (receiving just the portion it missed) instead
# of a full resync. The bigger the backlog, the longer a slave can stay
# disconnected and still catch up with a partial resynchronization. The
# backlog is allocated only once at least one slave has connected.
# repl-backlog-size 1mb

# Number of seconds after the last slave disconnects before the backlog buffer
# is freed. 0 means never release it.
# repl-backlog-ttl 3600

# Slave priority, published by Redis in the INFO output and used by Redis
# Sentinel to pick a slave to promote when the master stops working correctly.
# Lower numbers are preferred: with priorities 10, 100 and 25, Sentinel picks
# the slave with priority 10. A priority of 0 marks the slave as never
# eligible for promotion. The default is 100.
slave-priority 100

# A master can be told to stop accepting writes unless at least N slaves are
# online with a replication lag of at most M seconds (the lag is measured from
# the last ping received from each slave, normally sent every second). This
# does not guarantee that N replicas will actually accept a write, but it
# limits the window of exposure for lost writes when not enough slaves are
# available. For example, to require at least 3 slaves with a lag of at most
# 10 seconds:
# min-slaves-to-write 3
# min-slaves-max-lag 10
# Setting either value to 0 disables the feature. The defaults are
# min-slaves-to-write 0 (disabled) and min-slaves-max-lag 10.
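Both thresholds can be switched on at runtime to see the effect. A sketch, assuming a master that has one connected, healthy slave:

$ redis-cli CONFIG SET min-slaves-to-write 1
OK
$ redis-cli CONFIG SET min-slaves-max-lag 10
OK
$ redis-cli INFO replication | grep connected_slaves
connected_slaves:1
$ redis-cli SET foo bar        # accepted as long as the slave keeps up
OK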
################################## SECURITY ###################################

# Require clients to issue AUTH <password> before any other command. Useful
# when untrusted parties can reach the host running redis-server. Since Redis
# is very fast, an outside user can try up to 150k passwords per second
# against a good box, so use a very strong password, otherwise it is easy to
# break. Most setups leave this commented out for backward compatibility.
# requirepass foobared

# Command renaming. Dangerous commands can be renamed to something hard to
# guess in a shared environment, so that they stay available to internal
# tools but not to general clients:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# A command can also be disabled completely by renaming it to an empty string:
# rename-command CONFIG ""
# Note that renaming commands that are written to the AOF file or propagated
# to slaves may cause problems.
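Password protection can be tried out at runtime without editing the file. A hypothetical sketch (the password is just an example):

$ redis-cli CONFIG SET requirepass "s3cret-and-long-enough"
OK
$ redis-cli PING                                 # unauthenticated clients are now rejected
(error) NOAUTH Authentication required.
$ redis-cli -a "s3cret-and-long-enough" PING
PONG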
################################### LIMITS ####################################

# Maximum number of clients connected at the same time, 10000 by default. If
# Redis cannot raise the process file-descriptor limit to match, the limit
# becomes the current file limit minus 32 (Redis reserves a few descriptors
# for internal use). Once the limit is reached, new connections receive the
# error 'max number of clients reached'.
# maxclients 10000

# Memory usage limit in bytes. When the limit is reached Redis evicts keys
# according to maxmemory-policy; if it cannot evict, or the policy is
# noeviction, it replies with errors to commands that would use more memory
# (SET, LPUSH, and so on) while still serving read-only commands such as GET.
# Typically used when running Redis as an LRU cache, or to set a hard memory
# cap for an instance (with the noeviction policy).
#
# WARNING: with slaves attached, the slave output buffers are subtracted from
# the used-memory count, so that network problems or resyncs do not trigger a
# loop where evicted keys fill the slave output buffers with DELs, causing yet
# more evictions until the database is empty. In short, when slaves are
# attached, set maxmemory low enough to leave some free RAM for slave output
# buffers (not needed with the noeviction policy).
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis selects what to remove when maxmemory is
# reached. The available behaviours are:
#
# volatile-lru    -> evict a key with an expire set, using an LRU algorithm
# allkeys-lru     -> evict any key according to the LRU algorithm
# volatile-random -> evict a random key with an expire set
# allkeys-random  -> evict a random key, any key
# volatile-ttl    -> evict the key with the nearest expire time (smallest TTL)
# noeviction      -> evict nothing; return an error on write operations
#
# With any of these policies, Redis returns an error on write operations when
# there are no suitable keys for eviction. At the time of writing those
# commands are: set setnx setex append incr decr rpush lpush rpushx lpushx
# linsert lset rpoplpush sadd sinter sinterstore sunion sunionstore sdiff
# sdiffstore zadd zincrby zunionstore zinterstore hset hsetnx hmset hincrby
# incrby decrby getset mset msetnx exec sort
#
# The default is:
# maxmemory-policy noeviction

# The LRU and minimal-TTL algorithms are approximations (to save memory), so
# they can be tuned for speed or accuracy. By default Redis samples 5 keys and
# evicts the least recently used one; 10 samples approximates true LRU very
# closely but costs more CPU, 3 is very fast but not very accurate.
# maxmemory-samples 5
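Both the memory cap and the eviction policy can be applied to a running instance. A sketch, assuming a 100 MB limit is appropriate for the instance (the reported value is illustrative):

$ redis-cli CONFIG SET maxmemory 100mb
OK
$ redis-cli CONFIG SET maxmemory-policy allkeys-lru
OK
$ redis-cli INFO memory | grep used_memory_human
used_memory_human:812.55K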
############################## APPEND ONLY MODE ###############################

# By default Redis dumps the dataset to disk asynchronously (RDB snapshots).
# That is good enough for many applications, but a problem with the Redis
# process or a power outage can lose the last few minutes of writes, depending
# on the configured save points. The append-only file is an alternative
# persistence mode with much better durability: with the default fsync policy
# Redis loses at most one second of writes in a dramatic event like a power
# outage, or a single write if the Redis process itself crashes while the OS
# keeps running correctly. AOF and RDB can be enabled at the same time; if AOF
# is enabled, Redis loads the AOF at startup because it has the stronger
# durability guarantees.
# See http://redis.io/topics/persistence for more information.
appendonly no

# Name of the append-only file (default: "appendonly.aof").
appendfilename "appendonly.aof"

# The fsync() call tells the operating system to actually write data to disk
# instead of waiting for more data in the output buffer. Some OSes really
# flush, others just try to do it as soon as possible. Redis supports three
# policies:
#
# no:       never call fsync, let the OS decide when to flush. Fastest.
# always:   fsync after every write to the append-only log. Slowest, safest.
# everysec: fsync once per second. A compromise, and the default.
#
# Relax this to "no" for better performance if you can live with some data
# loss, or use "always" for the strongest guarantee at a significant speed
# cost. More details: http://antirez.com/post/redis-persistence-demystified.html
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no

# When the fsync policy is always or everysec and a background save or AOF
# rewrite is doing a lot of disk I/O, fsync() may block the main process for a
# long time on some Linux configurations (there is currently no fix; even an
# fsync in a different thread would block the synchronous write(2) call).
# Setting this option to "yes" prevents fsync() from being called in the main
# process while a BGSAVE or BGREWRITEAOF is in progress; during that window
# durability is effectively the same as "appendfsync no", so up to 30 seconds
# of log can be lost in the worst case (with default Linux settings). Turn it
# to "yes" only if you have latency problems; "no" is the safest choice from a
# durability point of view.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append-only file. Redis calls BGREWRITEAOF
# implicitly when the AOF grows by the given percentage over its size after
# the last rewrite (or over its size at startup, if no rewrite has happened
# since the restart). The minimum size avoids rewriting a file that is still
# small even though the percentage has been reached. Set the percentage to 0
# to disable automatic rewrites.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
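Enabling AOF and forcing a rewrite can both be done on a live instance. A minimal sketch:

$ redis-cli CONFIG SET appendonly yes            # start appending writes to appendonly.aof
OK
$ redis-cli BGREWRITEAOF                         # compact the file in the background
Background append only file rewriting started
$ redis-cli INFO persistence | grep aof_rewrite_in_progress
aof_rewrite_in_progress:0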
# The AOF may be found truncated at the end when Redis loads it back into
# memory at startup. This can happen when the machine Redis runs on crashes,
# especially with an ext4 filesystem mounted without the data=ordered option
# (it cannot happen when Redis itself crashes or aborts while the OS keeps
# working correctly). When that happens Redis can either exit with an error,
# or load as much data as possible (the default) and start, emitting a log to
# inform the user of the event. If this option is set to "no", the server
# aborts with an error and refuses to start, and the AOF must first be
# repaired with the redis-check-aof utility. Note that if the AOF is found
# corrupted in the middle, rather than truncated at the end, the server still
# exits with an error: this option only applies when Redis tries to read more
# data from the file than is actually there.
aof-load-truncated yes

################################ LUA SCRIPTING ###############################

# Maximum execution time of a Lua script in milliseconds. When the limit is
# exceeded Redis logs that the script is still running and starts replying to
# queries with an error. At that point only SCRIPT KILL (to stop a script that
# has not yet issued write commands) and SHUTDOWN NOSAVE (the only way to shut
# the server down if a write command was already issued and you do not want to
# wait for the script to finish) are available. Use 0 or a negative value for
# unlimited execution without warnings.
lua-time-limit 5000

################################ REDIS CLUSTER ################################

# WARNING EXPERIMENTAL: Redis Cluster is considered stable code, but it is not
# yet marked as "mature" because it has not been deployed in production by a
# non-trivial percentage of users. Normal Redis instances cannot be part of a
# cluster; only nodes started as cluster nodes can. To start the instance as a
# cluster node, uncomment the following:
# cluster-enabled yes

# Every cluster node has its own cluster configuration file, created and
# updated by Redis itself (it is not meant to be edited by hand). Make sure
# instances running on the same system do not use overlapping file names.
# cluster-config-file nodes-6379.conf

# Number of milliseconds a node must be unreachable before it is considered in
# failure state. Most other internal time limits are multiples of this value.
# cluster-node-timeout 15000
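Bringing up a single node in cluster mode and checking its state could look like the sketch below (the port and file names are hypothetical; forming a usable cluster additionally requires assigning the 16384 hash slots and introducing the nodes to each other):

$ redis-server ./redis-7000.conf        # contains: port 7000, cluster-enabled yes,
                                        # cluster-config-file nodes-7000.conf
$ redis-cli -p 7000 CLUSTER INFO | head -2
cluster_enabled:1
cluster_state:fail                      # stays "fail" until all hash slots are covered
$ redis-cli -p 7000 CLUSTER NODES       # this node's view of the cluster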
# A slave of a failing master avoids starting a failover if its data looks too
# old. There is no simple way for a slave to measure its "data age" exactly,
# so two checks are performed:
#
# 1) If several slaves are able to fail over, they exchange messages to give
#    an advantage to the slave with the best replication offset (i.e. the one
#    that processed more data from the master); each slave delays the start of
#    the failover proportionally to its rank.
# 2) Each slave computes the time of its last interaction with the master:
#    the last ping or command received (if the master is still "connected"),
#    or the time elapsed since the disconnection (if the replication link is
#    down). If that last interaction is too old, the slave does not try to
#    fail over at all. Specifically, a slave skips the failover when the
#    elapsed time is greater than:
#
#    (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
#    For example, with node-timeout 30 seconds, slave-validity-factor 10 and
#    the default repl-ping-slave-period of 10 seconds, a slave does not
#    attempt the failover if it could not talk to the master for more than
#    310 seconds.
#
# A large factor may allow slaves with too-old data to take over a master,
# while a too-small value may prevent the cluster from electing any slave at
# all. For maximum availability the factor can be set to 0: slaves then always
# try to fail over, regardless of the last interaction with the master (still
# applying a delay proportional to their offset rank). Zero is the only value
# that guarantees the cluster can always continue once all partitions heal.
# cluster-slave-validity-factor 10

# Cluster slaves can migrate to orphaned masters (masters left without working
# slaves). This improves the cluster's ability to resist failures, since an
# orphaned master cannot be failed over if it has no working slave. A slave
# migrates only if its old master still keeps at least "migration barrier"
# other working slaves; the default of 1 usually reflects the number of slaves
# you want per master. Set it to a very large value to disable migration; 0 is
# possible but only useful for debugging and dangerous in production.
# cluster-migration-barrier 1
# By default cluster nodes stop accepting queries if they detect at least one
# uncovered hash slot (no available node is serving it), so a partially down
# cluster (for example a range of uncovered hash slots) eventually becomes
# entirely unavailable, and becomes available again as soon as all slots are
# covered. If you want the working subset of the cluster to keep accepting
# queries for the part of the key space that is still covered, set this option
# to "no".
# cluster-require-full-coverage yes

# To set up a cluster, make sure to read the documentation at http://redis.io.

################################## SLOW LOG ###################################

# The slow log records queries that exceeded a given execution time. That time
# does not include I/O such as talking to the client or sending the reply,
# only the time spent actually executing the command (the only stage where the
# thread is blocked and cannot serve other requests).
#
# Threshold in microseconds (1000000 equals one second) above which a command
# is logged. A negative value disables the slow log; 0 logs every command.
slowlog-log-slower-than 10000

# Maximum number of entries kept in the slow log; when a new command is logged
# the oldest one is removed from the queue. There is no hard limit, but the
# log consumes memory, which can be reclaimed with SLOWLOG RESET.
slowlog-max-len 128
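The slow log is easiest to understand from the CLI. A sketch that lowers the threshold, runs a command and reads the log back (the ids, timestamps and durations will of course differ):

$ redis-cli CONFIG SET slowlog-log-slower-than 0   # temporarily log every command
OK
$ redis-cli SET foo bar
OK
$ redis-cli SLOWLOG GET 1                          # latest entry: id, timestamp,
1) 1) (integer) 5                                  # duration in microseconds, arguments
   2) (integer) 1482720000
   3) (integer) 12
   4) 1) "SET"
      2) "foo"
      3) "bar"
$ redis-cli SLOWLOG RESET
OK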
################################ LATENCY MONITOR ##############################

# The latency monitoring subsystem samples different operations at runtime to
# collect data about possible latency sources of the instance; the LATENCY
# command then lets the user print graphs and obtain reports. Only operations
# taking at least the configured number of milliseconds are logged; 0 turns
# the monitor off. It is disabled by default because it is mostly not needed
# unless you have latency issues, and collecting the data has a small but
# measurable cost under heavy load. It can be enabled at runtime with
# "CONFIG SET latency-monitor-threshold <milliseconds>" when needed.
latency-monitor-threshold 0

############################# EVENT NOTIFICATION ##############################

# Redis can notify Pub/Sub clients about events happening in the key space
# (documented at http://redis.io/topics/notifications). For instance, with
# keyspace event notification enabled, a DEL on key "foo" in database 0
# publishes two messages:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# The events to notify are selected with a string of class characters:
#
# K  Keyspace events, published with the __keyspace@<db>__ prefix
# E  Keyevent events, published with the __keyevent@<db>__ prefix
# g  Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $  String commands
# l  List commands
# s  Set commands
# h  Hash commands
# z  Sorted set commands
# x  Expired events (generated every time a key expires)
# e  Evicted events (generated when a key is evicted for maxmemory)
# A  Alias for "g$lshzxe", so the string "AKE" means all events
#
# The empty string disables notifications, which is the default since most
# users do not need the feature and it has some overhead. At least one of K or
# E must be present, otherwise no events are delivered. Examples: "Elg"
# enables keyevent notifications for list and generic commands; "Ex" lets you
# subscribe to the channel __keyevent@0__:expired for expired keys.
notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################
## Tuning section.

# Hashes are encoded with a memory-efficient data structure when they have a
# small number of entries and the biggest entry does not exceed a threshold:
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Small lists are encoded in a similar space-saving way, within these limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when they consist solely of
# strings that are base-10 integers in the 64-bit signed range. This is the
# maximum set size for that memory-saving encoding:
set-max-intset-entries 512

# Sorted sets are also specially encoded to save space when their length and
# element sizes are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation limit in bytes (including the 16-byte
# header); above it the dense representation is used. Values over 16000 are
# useless because the dense representation is then more memory efficient.
# ~3000 is the suggested value to get the space-efficient encoding without
# slowing down PFADD too much (it is O(N) with the sparse encoding); it can be
# raised to ~10000 when CPU is not a concern but space is, and the data set
# consists of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
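The effect of the encoding thresholds can be observed with OBJECT ENCODING. A sketch (the key names are arbitrary):

$ redis-cli HSET smallhash field value
(integer) 1
$ redis-cli OBJECT ENCODING smallhash              # few small entries: compact encoding
"ziplist"
$ redis-cli CONFIG SET hash-max-ziplist-entries 0  # force new hashes over the threshold
OK
$ redis-cli HSET otherhash field value
(integer) 1
$ redis-cli OBJECT ENCODING otherhash              # threshold exceeded: real hash table
"hashtable"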
# Active rehashing uses 1 millisecond out of every 100 milliseconds of CPU
# time to help rehash the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation (see dict.c) rehashes lazily:
# the more operations hit a table that is being rehashed, the more rehashing
# steps are performed, so on an idle server the rehashing never completes and
# the table keeps using extra memory. The default is to spend that millisecond
# 10 times per second actively rehashing the main dictionaries, freeing memory
# when possible. Use "no" if you have hard latency requirements and occasional
# replies delayed by 2 milliseconds are not acceptable in your environment;
# use "yes" if you want to free memory as soon as possible.
activerehashing yes

# Client output buffer limits force the disconnection of clients that do not
# read data from the server fast enough for some reason (a common case is a
# Pub/Sub client that cannot consume messages as fast as the publisher
# produces them). The limit can be set separately for three client classes:
#
# normal -> normal clients, including MONITOR clients
# slave  -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# Syntax: client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is disconnected immediately once the hard limit is reached, or when
# the soft limit is exceeded continuously for the given number of seconds. For
# example, with a hard limit of 32 MB and a soft limit of 16 MB / 10 seconds,
# the client is dropped at once if its output buffer reaches 32 MB, or if it
# stays above 16 MB for 10 seconds straight. Setting both limits to 0 disables
# the check.
#
# Normal clients are not limited by default, because they only receive data
# after asking for it, so only asynchronous clients can request data faster
# than they read it. Slaves and pubsub subscribers receive data in a push
# fashion and therefore have default limits.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
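Per-connection buffer usage can be inspected at runtime with CLIENT LIST, whose omem field reports the output buffer memory of each client. A sketch (field values are illustrative, middle fields elided):

$ redis-cli CLIENT LIST
id=3 addr=127.0.0.1:52555 fd=8 name= age=12 idle=0 ... obl=0 oll=0 omem=0 events=r cmd=client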
# Redis calls an internal function to perform many background tasks, such as
# closing timed-out client connections and purging expired keys that are never
# requested. Not every task runs at the same frequency, but all are scheduled
# according to the "hz" value. The default is 10; raising it uses more CPU
# when Redis is idle, but makes Redis more responsive when many keys expire at
# the same time and handles timeouts with more precision. The range is 1 to
# 500, although values over 100 are usually a bad idea; most users should keep
# the default of 10 and raise it to at most 100 only when very low latency is
# required.
hz 10

# When a child process rewrites the AOF file, enabling this option makes the
# file be fsync-ed every 32 MB of data generated. This commits the file to
# disk more incrementally and avoids big latency spikes.
aof-rewrite-incremental-fsync yes
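Many of the directives above can be changed at runtime with CONFIG SET, and CONFIG REWRITE writes the running configuration back into the redis.conf the server was started with. A sketch (the path is illustrative; CONFIG REWRITE fails if the server was started without a config file):

$ redis-cli CONFIG SET hz 20
OK
$ redis-cli CONFIG REWRITE          # persist the change into redis.conf
OK
$ grep "^hz" /path/to/redis.conf
hz 20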