# Redis configuration file example Redis 配置文件示例 # Note on units: when memory size is needed, it is possible to specify # 注意单位:当需要设置内存大小的时候,可以使用常用的格式1k 5GB 4M等等: # it in the usual form of 1k 5GB 4M and so forth: # # 1k => 1000 bytes # 1kb => 1024 bytes # 1m => 1000000 bytes # 1mb => 1024*1024 bytes # 1g => 1000000000 bytes # 1gb => 1024*1024*1024 bytes # # units are case insensitive so 1GB 1Gb 1gB are all the same. # 单位大小写不敏感,因此1GB、1Gb、1gB都是一样的 ################################## INCLUDES ################################### # Include one or more other config files here. This is useful if you # have a standard template that goes to all Redis servers but also need # to customize a few per-server settings. Include files can include # other files, so use this wisely. # 这里可以引入一个或多个其他的配置文件。当你有一个标准的模板去运行所有的Redis # 服务,但是对每个服务有少量修改的时候,非常有用。Include files可以包含其他文件, # 因此应用很广泛。 # # Notice option "include" won't be rewritten by command "CONFIG REWRITE" # from admin or Redis Sentinel. Since Redis always uses the last processed # line as value of a configuration directive, you'd better put includes # at the beginning of this file to avoid overwriting config change at runtime. # # 注意“include”操作不能被来自admin或者Redis 守护的“CONFIG REWRITE”命令重写, # 由于Redis通常使用最后处理行,最好是把这个文件放在首行避免在运行时候重写 # If instead you are interested in using includes to override configuration # options, it is better to use include as the last line. # 如果你喜欢使用覆盖重写可以使用下面的写法 # # include /path/to/local.conf # include /path/to/other.conf ################################ GENERAL ##################################### # By default Redis does not run as a daemon. Use 'yes' if you need it. # Redis默认不是以守护进程的方式运行的,如果需要请设置为“yes”。 # Note that Redis will write a pid file in /var/run/redis.pid when daemonized. # 注意的是一旦使用守护方式运行将会输出进程id到一个pid文件 daemonize yes # When running daemonized, Redis writes a pid file in /var/run/redis.pid by # default. You can specify a custom pid file location here. # 当运行守护模式,Redis默认将pid写到文件var/run/redis.pid。我们可以自定义pid文件 pidfile /var/run/redis/redis.pid # Accept connections on the specified port, default is 6379. # If port 0 is specified Redis will not listen on a TCP socket. # 接受连接的自定义端口,默认为6379。 # 如果设置为0,Redis将不能通过TCP连接 port 6379 # TCP listen() backlog. # TCP 监听备份日志 # # In high requests-per-second environments you need an high backlog in order # to avoid slow clients connections issues. Note that the Linux kernel # will silently truncate it to the value of /proc/sys/net/core/somaxconn so # make sure to raise both the value of somaxconn and tcp_max_syn_backlog # in order to get the desired effect. # 在高并发请求环境需要设置高备份日志去避免客户端慢连接问题。注意 :Linux内核 # 会默认截取 /proc/sys/net/core/somaxconn 的值,因此需要提高somaxconn和 # tcp_max_syn_backlog的值以达到效果 tcp-backlog 511 # By default Redis listens for connections from all the network interfaces # available on the server. It is possible to listen to just one or multiple # interfaces using the "bind" configuration directive, followed by one or # more IP addresses. # 侦听服务器上网络的连接的默认接口。可以有一个或多个IP绑定到多个接口配置项, # # Examples: # 例子: # # bind 192.168.1.100 10.0.0.1 bind 127.0.0.1 # Specify the path for the Unix socket that will be used to listen for # incoming connections. There is no default, so Redis will not listen # on a unix socket when not specified. # 定义用于侦听传入连接的Unix套接字路径,这里没有默认值,因此当需要UNIX # 套接字连接时需要定义此项目 # # unixsocket /tmp/redis.sock # unixsocketperm 700 # Close the connection after a client is idle for N seconds (0 to disable) timeout 0 # 当客户端连接空闲N秒时关闭连接(0表示禁用,不关闭) # TCP keepalive. # TCP 长连接 # # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence # of communication. This is useful for two reasons: # # 如果非0,使用SO_KEEPALIVE给客户端发送TCP ACKS在缺乏连接的情况下: # # 1) Detect dead peers. # 2) Take the connection alive from the point of view of network # equipment in the middle. # 1)僵尸节点的检测 # 2)从中间网络设备的角度连接 # # On Linux, the specified value (in seconds) is the period used to send ACKs. # Note that to close the connection the double of the time is needed. # On other kernels the period depends on the kernel configuration. # 在Linux中,默认值(秒)一般事情发送ACKS。注意的是关闭连接需要两倍时间。 # 其他内核中处理期是依赖内核配置的 # # A reasonable value for this option is 60 seconds. # 一个合理的配置是 60秒 tcp-keepalive 0 # Specify the server verbosity level. # This can be one of: # debug (a lot of information, useful for development/testing) # verbose (many rarely useful info, but not a mess like the debug level) # notice (moderately verbose, what you want in production probably) # warning (only very important / critical messages are logged) # 定义服务器冗余等级 # 可以设置为以下几种 # debug:大量信息,用于开发或测试 # verbose:很多有用的信息,但是不像debug那么乱 # notice:可能需要的适量有用信息, # worning:只记录非常重要,至关重要的信息 loglevel notice # Specify the log file name. Also the empty string can be used to force # Redis to log on the standard output. Note that if you use standard # output for logging but daemonize, logs will be sent to /dev/null # 定义日志文件名称。也可以指控强制Redis使用标准输出。注意的是如果使用使用 # 标准输出而不是使用守护模式,日志将会被发送到 /dev/null logfile /var/log/redis/redis.log # To enable logging to the system logger, just set 'syslog-enabled' to yes, # and optionally update the other syslog parameters to suit your needs. # 打开日志记录到系统日志,将'syslog-enabled' 设置为 “yes”,可以根据需求自 # 行修改其他日志参数 # syslog-enabled no # Specify the syslog identity # 定义日志身份. # syslog-ident redis # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. # 自定义syslog,必须是一各用户或者在 LOCAL0-LOCAL7中 # syslog-facility local0 # Set the number of databases. The default database is DB 0, you can select # a different one on a per-connection basis using SELECT <dbid> where # dbid is a number between 0 and 'databases'-1 # 设置数据库的数目,默认的数据库是DB 0,在每次连接时候可以通过 SELECT <dbid> # 选择不同的数据库。(dbid的范围在0和数据库数目-1之间) databases 16 ################################ SNAPSHOTTING ################################ # # Save the DB on disk: # 保存数据到磁盘: # # save <seconds> <changes> # # Will save the DB if both the given number of seconds and the given # number of write operations against the DB occurred. # 在符合给定参数的情况发生时,将数据写入磁盘 # # In the example below the behaviour will be to save: # 一些例子在下面情况下将会发生: # after 900 sec (15 min) if at least 1 key changed # 15分钟内至少一个key发生变化 # after 300 sec (5 min) if at least 10 keys changed # 300秒内 至少10个key发生变化 # after 60 sec if at least 10000 keys changed # 50秒内 至少1000个key发生变化 # # Note: you can disable saving completely by commenting out all "save" lines. # 注意:可以禁用完全保存,通过所有“save”行 # It is also possible to remove all the previously configured save # points by adding a save directive with a single empty string argument # like in the following example: # 可以清除之前的的配置保存点,通过一个单独的空参数,如: # # save "" save 900 1 save 300 10 save 60 10000 # By default Redis will stop accepting writes if RDB snapshots are enabled # (at least one save point) and the latest background save failed. # This will make the user aware (in a hard way) that data is not persisting # on disk properly, otherwise chances are that no one will notice and some # disaster will happen. # 默认将会停止接受写,如果RDB镜像可以用或者最近的后台保存错误 # 将很难将数据持久化到磁盘上,否则异常发生 # # If the background saving process will start working again Redis will # automatically allow writes again. # 如果后台进程开始工作 RDIS将会再次自动开始写入 # # However if you have setup your proper monitoring of the Redis server # and persistence, you may want to disable this feature so that Redis will # continue to work as usual even if there are problems with disk, # permissions, and so forth. # 然而当你设置一个服务器监听或者持久化时候,你会想禁用这些特性保障Redis和正常 # 一样工作,除非硬盘、权限等等发生问题 stop-writes-on-bgsave-error yes # Compress string objects using LZF when dump .rdb databases? # For default that's set to 'yes' as it's almost always a win. # If you want to save some CPU in the saving child set it to 'no' but # the dataset will likely be bigger if you have compressible values or keys. # 当dump rdb时候使用压缩:默认设置是yes,通常着用用户 # 如果想保存一些cpu在子保存设备,可以设置为no,但是数据会变得很多,用使用可以 # 压缩的值或key # rdbcompression yes # Since version 5 of RDB a CRC64 checksum is placed at the end of the file. # This makes the format more resistant to corruption but there is a performance # hit to pay (around 10%) when saving and loading RDB files, so you can disable it # for maximum performances. # # RDB files created with checksum disabled have a checksum of zero that will # tell the loading code to skip the check. # 从RDB 5开始对文件尾部做CRC64校验和 rdbchecksum yes # The filename where to dump the DB # dump数据库的名称 dbfilename dump.rdb # The working directory. # 工作目录 # # The DB will be written inside this directory, with the filename specified # above using the 'dbfilename' configuration directive. # 数据库将会被写在内部目 和定义的文件名都在配置项 # # The Append Only File will also be created inside this directory. # 在这个目录附加到一个文件 # Note that you must specify a directory here, not a file name. # 必须设置一个目录在这,而不是一个文件名 dir /var/lib/redis/ ################################# REPLICATION ################################# # Master-Slave replication. Use slaveof to make a Redis instance a copy of # another Redis server. A few things to understand ASAP about Redis replication. # 主从复制,使用slaveof使得一个Reids实例被复制到另一台Redis服务器。需要尽快了解 # 的一些关于Redis复制的事情: # # 1) Redis replication is asynchronous, but you can configure a master to # stop accepting writes if it appears to be not connected with at least # a given number of slaves. # 2) Redis slaves are able to perform a partial resynchronization with the # master if the replication link is lost for a relatively small amount of # time. You may want to configure the replication backlog size (see the next # sections of this file) with a sensible value depending on your needs. # 3) Replication is automatic and does not need user intervention. After a # network partition slaves automatically try to reconnect to masters # and resynchronize with them. # 1)Redis复制是异步进行的,但是可以配置一个主停止接受写当至少一个从库不能连接时 # 2)Redis从库能够在连接段时间断开时部分同步。你可以根据需求配置合适的复制文件 # 大小(看本文下一部分) # 3)在从库自动连接主库并同步完成,复制可以不要用户操作而自动执行, # # slaveof <masterip> <masterport> # If the master is password protected (using the "requirepass" configuration # directive below) it is possible to tell the slave to authenticate before # starting the replication synchronization process, otherwise the master will # refuse the slave request. # 如果主库的密码是受保护的(使用下面的 requirepass 配置项),可以先告诉从库授权 # 在开始同步之前,否则主库将会拒绝连接 # # masterauth <master-password> # When a slave loses its connection with the master, or when the replication # is still in progress, the slave can act in two different ways: # 当从库和主库断开连接,或者复制还在运行,从库可以通过两种不同方式进行 # # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will # still reply to client requests, possibly with out of date data, or the # data set may just be empty if this is the first synchronization. # # 2) if slave-serve-stale-data is set to 'no' the slave will reply with # an error "SYNC with master in progress" to all the kind of commands # but to INFO and SLAVEOF. # 1)如果 slave-serve-stale-data 设置为‘yes’(默认值),从库将会回复客户过时 # 的数据,或者第一次访问时候返回空 # 2)如果 slave-serve-stale-data 设置为‘no’从库将会给除了INFO和SLAVEOF之外的 # 所有命令返回"SYNC with master in progress" # # slave-serve-stale-data yes # You can configure a slave instance to accept writes or not. Writing against # a slave instance may be useful to store some ephemeral data (because data # written on a slave will be easily deleted after resync with the master) but # may also cause problems if clients are writing to it because of a # misconfiguration. # 可以配置从库实例是否接受写。禁止写实例可能用于存储一些临时的数据(数据从主库同步后 # 写在从库比删快)。由于配置错误 但是可能引起一些问题,假如客户端在写 # # Since Redis 2.6 by default slaves are read-only. # 从2.6起 默认是只读配置 # Note: read only slaves are not designed to be exposed to untrusted clients # on the internet. It's just a protection layer against misuse of the instance. # Still a read only slave exports by default all the administrative commands # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve # security of read only slaves using 'rename-command' to shadow all the # administrative / dangerous commands. # 注意:默认只读不能用于检测不被信任的连接。至少用于防止不当使用的一个实例。 # 导出一个只读从库需要管理命令 如CONFIG ,DEBUG等等,在有限的范围内,使用 # “重命名命令”来隐藏所有的 管理/危险 命令以提高只读的安全性, slave-read-only yes # Replication SYNC strategy: disk or socket. # 复制同步策略:磁盘或套接字 # # ------------------------------------------------------- # WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY # 警告:无盘复制当前还在实验阶段 # ------------------------------------------------------- # # New slaves and reconnecting slaves that are not able to continue the replication # process just receiving differences, need to do what is called a "full # synchronization". An RDB file is transmitted from the master to the slaves. # The transmission can happen in two different ways: # 新的从库或者重连接的从库 不能够继续进行复制只能收到不同信息,需要调用“full syncharonization” # 命令。一个RDB文件将会从主库传送到从库。这个传送有两种不同方式: # # # 1) Disk-backed: The Redis master creates a new process that writes the RDB # file on disk. Later the file is transferred by the parent # process to the slaves incrementally. # 2) Diskless: The Redis master creates a new process that directly writes the # RDB file to slave sockets, without touching the disk at all. # 1)磁盘备份:主库将会创建一个新的进程在磁盘上写入RDB文件,之后RDB文件将会由 # 由父进程递增地传入从库 # 2)无盘备份:主库创建一个新的进程直接将RDB文件写入从库套接字。不会在磁盘上写。 # # With disk-backed replication, while the RDB file is generated, more slaves # can be queued and served with the RDB file as soon as the current child producing # the RDB file finishes its work. With diskless replication instead once # the transfer starts, new slaves arriving will be queued and a new transfer # will start when the current one terminates. # 通过磁盘备份复制,当rdb文件被创建后,许多在队列中等待的RDB文件的从库将会尽快的处理完 # RDB文件。通过无盘复制可以代替一次传送的启动,新的从库到达队列后需要等到当前传输终止 # 才能够进行。 # # When diskless replication is used, the master waits a configurable amount of # time (in seconds) before starting the transfer in the hope that multiple slaves # will arrive and the transfer can be parallelized. # 使用无盘复制时,主库希望配置一个时间(单位:秒)在期待传输钱,以保障多从库到达时候传送 # 能够并行执行 # # With slow disks and fast (large bandwidth) networks, diskless replication # works better. # 使用慢磁盘和快(带宽大)网络时,无盘复制会更好 repl-diskless-sync no # When diskless replication is enabled, it is possible to configure the delay # the server waits in order to spawn the child that trnasfers the RDB via socket # to the slaves. # 当无盘复制被启用。可能需要配置延迟服务器等待以确保分出子进程去通过套接字将RDB传入从库。 # # This is important since once the transfer starts, it is not possible to serve # new slaves arriving, that will be queued for the next RDB transfer, so the server # waits a delay in order to let more slaves arrive. # 重要的是由于一旦传送开始,就不能够给新到达的从库提供服务,新的从库将会在队列中等待 #下次RDB传输, 因此服务器等待一个延迟以确保 让更多的从库到达 # # The delay is specified in seconds, and by default is 5 seconds. To disable # it entirely just set it to 0 seconds and the transfer will start ASAP. # 这个延迟以秒为单位,默认5s,禁用时只需要设置为0那么传送将会尽快开始 repl-diskless-sync-delay 5 # Slaves send PINGs to server in a predefined interval. It's possible to change # this interval with the repl_ping_slave_period option. The default value is 10 # seconds. # 从库在一个预定的时间间隔内给服务器服务器发生PINGs。能够通过修改 # repl_ping_slave_period调整时间间隔。默认10s # # repl-ping-slave-period 10 # The following option sets the replication timeout for: # 下面操作是设置传送超时的: # # 1) Bulk transfer I/O during SYNC, from the point of view of slave. # 2) Master timeout from the point of view of slaves (data, pings). # 3) Slave timeout from the point of view of masters (REPLCONF ACK pings). # 1)从从库的角度看带来的传送I/O在同步中 # 2)从从库看主库超时(数据,PINGs) # 3)在主库可以通过ACK看到从库超时 # # It is important to make sure that this value is greater than the value # specified for repl-ping-slave-period otherwise a timeout will be detected # every time there is low traffic between the master and the slave. # 需要确保 repl-timeout的值答应定义的 repl-ping-slave-period的值,否则将会 # 每次检测超时,主从之间的传输就会变得很慢 # # repl-timeout 60 # Disable TCP_NODELAY on the slave socket after SYNC? # 是否在同步后禁用从库套接字TCP_NODELAY # # If you select "yes" Redis will use a smaller number of TCP packets and # less bandwidth to send data to slaves. But this can add a delay for # the data to appear on the slave side, up to 40 milliseconds with # Linux kernels using a default configuration. # 如果设置为“yes”,Redis会占用一点点带宽发送少量的TCP包给从库。但是会对数据 # 在从库上出现产生一些延迟,多达40毫秒,取决于LINUX 内核的默认设置 # # If you select "no" the delay for data to appear on the slave side will # be reduced but more bandwidth will be used for replication. # 设置为“no”是将会降低延迟,但是会占用很多带宽。 # # By default we optimize for low latency, but in very high traffic conditions # or when the master and slaves are many hops away, turning this to "yes" may # be a good idea. # 默认设置优化为低延迟,但是在多连接或者主从之间有很多跳的时候,设置为yes会好些 repl-disable-tcp-nodelay no # Set the replication backlog size. The backlog is a buffer that accumulates # slave data when slaves are disconnected for some time, so that when a slave # wants to reconnect again, often a full resync is not needed, but a partial # resync is enough, just passing the portion of data the slave missed while # disconnected. # 设置复制备份文件大小。备份文件是当从库偶尔断开是的缓存积累,当一个从库重新连接后 # 经常不需要完全传送,只需要一个部分传送继续。只传送从库在断开连接时缺失的部分 # # The bigger the replication backlog, the longer the time the slave can be # disconnected and later be able to perform a partial resynchronization. # 大备份文件,长时间的备份会引起从库断开连接或者延迟,或者部分同步 # # The backlog is only allocated once there is at least a slave connected. # 至少有一个从库连接时需要分配backlog # # repl-backlog-size 1mb # After a master has no longer connected slaves for some time, the backlog # will be freed. The following option configures the amount of seconds that # need to elapse, starting from the time the last slave disconnected, for # the backlog buffer to be freed. # 主库不在被连接时,backlog将会被释放。下面的配置释放需要的延迟时间(秒): # 从最后一个从库断开到backlog缓存被释放 # # A value of 0 means to never release the backlog. # 0 表示不释放backlog:而不是立即释放 # repl-backlog-ttl 3600 # The slave priority is an integer number published by Redis in the INFO output. # It is used by Redis Sentinel in order to select a slave to promote into a # master if the master is no longer working correctly. # 从库的优先级是一个整型数( REDIS中用INFO 可以输出)。 # 使用REDIS守护确保选择一个从库当主库不在正常工作时。 # # A slave with a low priority number is considered better for promotion, so # for instance if there are three slaves with priority 10, 100, 25 Sentinel will # pick the one with priority 10, that is the lowest. # 数字越小优先级越高,因此一个实例中如果从库优先级有10,100,25,守护将会现在最 # 小的10 # # However a special priority of 0 marks the slave as not able to perform the # role of master, so a slave with priority of 0 will never be selected by # Redis Sentinel for promotion. # 但是特殊定义0,意味着从库不能扮演主库的角色,因此优先级设置为0的从库,不会被选中 # # By default the priority is 100. # 默认的优先级是100 slave-priority 100 # It is possible for a master to stop accepting writes if there are less than # N slaves connected, having a lag less or equal than M seconds. # 少于N个从库连接时候,主库可能需要停止写,需要设置一个少于或等于M秒的参数 # # The N slaves need to be in "online" state. # N个从库应该为“online”状态 # # The lag in seconds, that must be <= the specified value, is calculated from # the last ping received from the slave, that is usually sent every second. # 滞后的秒数必须小于或等于定义的值,是有最后的来自从库的ping值来计算的,通常 # ping每秒都会发送。 # This option does not GUARANTEE that N replicas will accept the write, but # will limit the window of exposure for lost writes in case not enough slaves # are available, to the specified number of seconds. # 这个操作不能保障 N个复制接受写,但是会限制暴露的窗口使用情况以防没有足够的从库 # 可用,去定义秒数 # For example to require at least 3 slaves with a lag <= 10 seconds use: # 假如需要3个从库在小于10秒的延迟后使用。 # min-slaves-to-write 3 # min-slaves-max-lag 10 # # Setting one or the other to 0 disables the feature. # 设置为1 或遏制另外一个设置为0会禁止特性 # # By default min-slaves-to-write is set to 0 (feature disabled) and # min-slaves-max-lag is set to 10. # 默认设置 min-slaves-to-write是0(禁用) min-slaves-max-lag是10 ################################## SECURITY ################################### # Require clients to issue AUTH <PASSWORD> before processing any other # commands. This might be useful in environments in which you do not trust # others with access to the host running redis-server. # 在客户端处理前需要验证AUTH。可以用于不是特别被信任的环境运行的redis-server # This should stay commented out for backward compatibility and because most # people do not need auth (e.g. they run their own servers). # 应该保持注释掉的向后兼容性,因为大多人不需要auth(运行自己的服务器) # # Warning: since Redis is pretty fast an outside user can try up to # 150k passwords per second against a good box. This means that you should # use a very strong password otherwise it will be very easy to break. # 警告:由于redis很快,外部用户可以从一个窗口以150k/s的速度发送密码来攻击。 # 必须设置一个非常健壮的密码,否则很容易被攻破。 # requirepass foobared # Command renaming. # 重命名命令 # # It is possible to change the name of dangerous commands in a shared # environment. For instance the CONFIG command may be renamed into something # hard to guess so that it will still be available for internal-use tools # but not available for general clients. # 在共享环境中有可能需要修改一些危险命令。一个实例中的CONFIG 命令可以被重命名为 # 一些难以猜的命令,因此在内部使用时候还是有效的,但是一般的客户端是无效的 # # Example: # 例如 # # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 # # It is also possible to completely kill a command by renaming it into # an empty string: # 也可以通过设置为空来关闭这个命令, # # rename-command CONFIG "" # # Please note that changing the name of commands that are logged into the # AOF file or transmitted to slaves may cause problems. # 请注意,修改命令的名称不能够记录到AOF文件或者传送到从服,否则会引起问题 ################################### LIMITS #################################### # Set the max number of connected clients at the same time. By default # this limit is set to 10000 clients, however if the Redis server is not # able to configure the process file limit to allow for the specified limit # the max number of allowed clients is set to the current file limit # minus 32 (as Redis reserves a few file descriptors for internal uses). # 设置最大客户端同一时间连接数。默认设置10000个连接,但是如果Redis服务器不能支持 # 处理这么多文件,应以允许指定的限制允许的客户端的最大数量被设置为当前文件限制 # 减去32(作为Redis的保留了一个一些文件描述符供内部使用)。 # # Once the limit is reached Redis will close all the new connections sending # an error 'max number of clients reached'. # 一旦连接数到达Redis将会不按包所有新的连接 并抛出“max number of clients reached”。 # # maxclients 10000 # Don't use more memory than the specified amount of bytes. # When the memory limit is reached Redis will try to remove keys # according to the eviction policy selected (see maxmemory-policy). # 不要使用多余定义位数的内存,当内存占用达到时候,Redis将会尝试根据回收规则删除keys # (参见:最大内存规则) # # If Redis can't remove keys according to the policy, or if the policy is # set to 'noeviction', Redis will start to reply with errors to commands # that would use more memory, like SET, LPUSH, and so on, and will continue # to reply to read-only commands like GET. # 如果Redis不能够根据规则移除keys,或者规则被设置为“noeviction”,如果要使用更多内存 #Redis将会在执行命令时候返回错误,像SET ,LPUSH等到呢个,可以继续执行只读命令如 GET # # # This option is usually useful when using Redis as an LRU cache, or to set # a hard memory limit for an instance (using the 'noeviction' policy). # 这个操作用于当REDIS作为LRU缓存时候,或者设置一个代内存限制一个实例 # # WARNING: If you have slaves attached to an instance with maxmemory on, # the size of the output buffers needed to feed the slaves are subtracted # from the used memory count,so that network problems / resyncs will # not trigger a loop where keys are evicted, and in turn the output # buffer of slaves is full with DELs of keys evicted triggering the deletion # of more keys, and so forth until the database is completely emptied. # 如果有从库依附于一个设置了maxmemory的实例,输出输出缓存的大小需要支持从何 # 警告:如果有从库访问一个maxmerory打开的实例,这个支持从库的输出缓存的大小需要从已使用 # 的内存数中减去,使得网络问题、重新同步不会在keys被移除时触发循环,依次输出的从库缓存被 # 已经释放key的删除占满,触发删除更多key。以此类推知道数据库被完全清空 # # In short... if you have slaves attached it is suggested that you set a lower # limit for maxmemory so that there is some free RAM on the system for slave # output buffers (but this is not needed if the policy is 'noeviction'). # 简而言之。如果有从库依附,建议设置一个小的现在给maxmemory.区别有空闲的内存留给,从库 # 缓存输出(如果规则是“noeviction” 将不需要 ) # # maxmemory <bytes> # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory # is reached. You can select among five behaviors: # MAXMEMORY 规则:redis是如何在最大内存到达时选择移除的,有5种方式: # # volatile-lru -> remove the key with an expire set using an LRU algorithm # allkeys-lru -> remove any key according to the LRU algorithm # volatile-random -> remove a random key with an expire set # allkeys-random -> remove a random key, any key # volatile-ttl -> remove the key with the nearest expire time (minor TTL) # noeviction -> don't expire at all, just return an error on write operations # # volatile-lru -> 删除过期key使用LRU计算 # allkeys-lru -> 删除所有key使用LRU计算 # volatile-random -> 根据过期规则删除 一个 key # allkeys-random -> 删除任意key # volatile-ttl -> 根据最近过期时间删除一个key(minor TTL) # noeviction -> 不过期,写的时候返回错误。 # Note: with any of the above policies, Redis will return an error on write # operations, when there are no suitable keys for eviction. # # At the date of writing these commands are: set setnx setex append # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby # getset mset msetnx exec sort # 注意: 使用以上热河规则时候,Redis都会在没有符合条件的释放key时发生写错误 # # 当前设置这些命令: set setnx setex append # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby # getset mset msetnx exec sort # # The default is: # 默认规则是: # # maxmemory-policy volatile-lru # LRU and minimal TTL algorithms are not precise algorithms but approximated # algorithms (in order to save memory), so you can select as well the sample # size to check. For instance for default Redis will check three keys and # pick the one that was used less recently, you can change the sample size # using the following configuration directive. # LRU和自动延迟计算不是精确的计算但是是准确计算(为了保存内存),因可以可以选择 # 一个样本中的大小检验。 对Redis默认实例将会检验3个keys选择一个最近很少使用的key, # 可以使用下面的配置项修改样本大小 # # maxmemory-samples 3 ############################## APPEND ONLY MODE ############################### # By default Redis asynchronously dumps the dataset on disk. This mode is # good enough in many applications, but an issue with the Redis process or # a power outage may result into a few minutes of writes lost (depending on # the configured save points). # 通过默认的REDIS异步转存数据到磁盘。这种模式对于很多应用来说已经很好,但是一个 # redis处理问题或者电源故障都会引起几分钟的写丢失(取决于:设置的保存点) # # The Append Only File is an alternative persistence mode that provides # much better durability. For instance using the default data fsync policy # (see later in the config file) Redis can lose just one second of writes in a # dramatic event like a server power outage, or a single write if something # wrong with the Redis process itself happens, but the operating system is # still running correctly. # AOF 方式是代替rdb的一个持久方式。实例中使用默认的数据同步方式(配置文件下面) # REDIS会在异常中丢失1秒的写(如电源故障,或者只写是REDIS进程自己发生异常), # 但是服务器正常运行。 # # AOF and RDB persistence can be enabled at the same time without problems. # If the AOF is enabled on startup Redis will load the AOF, that is the file # with the better durability guarantees. # AOF 和RDB 可以同时执行。如果AOF打开,启动REDIS时候会加载AOF,因为这个文件有好的 # 持久保障 # # Please check http://redis.io/topics/persistence for more information. # 了解更多内容,请参见: appendonly no # The name of the append only file (default: "appendonly.aof") # AOF 文件名称(默认:“appendonly.aof”) appendfilename "appendonly.aof" # The fsync() call tells the Operating System to actually write data on disk # instead of waiting for more data in the output buffer. Some OS will really flush # data on disk, some other OS will just try to do it ASAP. # fsysnc给操作系统发生指令将数据写入磁盘代替等待在数据缓存中的更多数据。一些系统会 # 刷新磁盘数据,也有一些很快处理完 。 # # Redis supports three different modes: # Redis支持有三种模式: # # no: don't fsync, just let the OS flush the data when it wants. Faster. # always: fsync after every write to the append only log. Slow, Safest. # everysec: fsync only one time every second. Compromise. # no:不同步写入。仅仅在需要的时候由操作系统刷新数据。比较快 # always: 一旦有写操作就同步写入文件。慢、最安全 # erverysec:每秒同步写一次。折中。 # # The default is "everysec", as that's usually the right compromise between # speed and data safety. It's up to you to understand if you can relax this to # "no" that will let the operating system flush the output buffer when # it wants, for better performances (but if you can live with the idea of # some data loss consider the default persistence mode that's snapshotting), # or on the contrary, use "always" that's very slow but a bit safer than # everysec. # 默认的方式是“ererysec”,速度和安全中是一个比较折中的合适选择。 取决你的理解 #程度,如果你想为了更好的用户体验,设置为“no”,让操作系统刷新数据输出缓存( 但是如你能接受一些数据快照中的数据丢失,) # # More details please check the following article: 、 # 详细内容参考下面文章: # http://antirez.com/post/redis-persistence-demystified.html # # If unsure, use "everysec". # 如果不确定,请使用“everysec” # appendfsync always appendfsync everysec # appendfsync no # When the AOF fsync policy is set to always or everysec, and a background # saving process (a background save or AOF log background rewriting) is # performing a lot of I/O against the disk, in some Linux configurations # Redis may block too long on the fsync() call. Note that there is no fix for # this currently, as even performing fsync in a different thread will block # our synchronous write(2) call. # 一旦AOF同步写规则本设置成always或者everysec,后台将其保留一个进程产生大量的 # 磁盘I/O,在一些Liunx配置中 REDIS可以阻断太长的rsync()调用。注意这里没有锁定当前 #,设置其他的fsync在一个不同的进程中会阻断同步写 # # # In order to mitigate this problem it's possible to use the following option # that will prevent fsync() from being called in the main process while a # BGSAVE or BGREWRITEAOF is in progress. # 为了减轻我们在使用中的问题。当“BGSAVE”或者“BGREWRITEAOF”运行时,防止fsync #在主进程中被调用 # # This means that while another child is saving, the durability of Redis is # the same as "appendfsync none". In practical terms, this means that it is # possible to lose up to 30 seconds of log in the worst scenario (with the # default Linux settings). # 当另外一个子进程在保存,Redis持久化是和"appendfsync none" 一样的。实际应用中 # 会产生丢失30秒(根据LINUX默认配置) # # If you have latency problems turn this to "yes". Otherwise leave it as # "no" that is the safest pick from the point of view of durability. # 如果有延迟问题 设置此项为“yes”。否则 设置为no将是最安全的持久化 no-appendfsync-on-rewrite no # Automatic rewrite of the append only file. # Redis is able to automatically rewrite the log file implicitly calling # BGREWRITEAOF when the AOF log size grows by the specified percentage. # 自动重写AOF文件 # 当AOF文件,增长到设置的百分比,REDIS能够调用BAREWRITFOF自动在后台重写文件 # # This is how it works: Redis remembers the size of the AOF file after the # latest rewrite (if no rewrite has happened since the restart, the size of # the AOF at startup is used). # 是这样工作的:REDIS在最后一次重写后记录AOF文件的大小(如果重启后没有重写,大小 # 就是AOF文件开始时的大小) # # This base size is compared to the current size. If the current size is # bigger than the specified percentage, the rewrite is triggered. Also # you need to specify a minimal size for the AOF file to be rewritten, this # is useful to avoid rewriting the AOF file even if the percentage increase # is reached but it is still pretty small. # 这个原始大小和当前大小比较,如果当前大小多出定义的百分比,重写将会被触发。 # 也可以给要重新的文件可以定义一个小的大小,可以避免当增长到定义的百分比时还很小 #而重写AOF # # Specify a percentage of zero in order to disable the automatic AOF # 设置百分比为0的时候禁用自动AOF # rewrite feature. # 重写特性 auto-aof-rewrite-percentage 100 auto-aof-rewrite-min-size 64mb # An AOF file may be found to be truncated at the end during the Redis # startup process, when the AOF data gets loaded back into memory. # This may happen when the system where Redis is running # crashes, especially when an ext4 filesystem is mounted without the # data=ordered option (however this can't happen when Redis itself # crashes or aborts but the operating system still works correctly). # AOF在REDIS开启时候数据被完全加入内存后,可以被清除。 发生在运行REDIS的 # 系统崩溃,特别是ext4文件系统在挂载时候使用data=ordered操作(然后这个不能发生 # 在REDIS自己崩溃或者终止但是系统依然运行时候) # # Redis can either exit with an error when this happens, or load as much # data as possible (the default now) and start if the AOF file is found # to be truncated at the end. The following option controls this behavior. # REDIS可以在异常发生,或者加载了最多文件时候启动而AOF被清空的情况下退出。下面 # 操作来控制这个行为 # # If aof-load-truncated is set to yes, a truncated AOF file is loaded and # the Redis server starts emitting a log to inform the user of the event. # Otherwise if the option is set to no, the server aborts with an error # and refuses to start. When the option is set to no, the user requires # to fix the AOF file using the "redis-check-aof" utility before to restart # the server. # 如果aof-load-truncated 被设置为yes,一个清空的AOF文件被加载,redis服务器开始 # 发送一个日志记录用户事件。否则被设置为no,服务器会异常退出拒绝启动。当此项 # 被设置为no,用户需要在重启服务器前确定"redis-check-aof" 有效 # # Note that if the AOF file will be found to be corrupted in the middle # the server will still exit with an error. This option only applies when # Redis will try to read more data from the AOF file but not enough bytes # will be found. # 注意如果AOF文件中间出现异常,服务器依然将会退出。这个操作紧紧出现在应用redis # 从AOF 文件中读取更多的数据,但是没有足够的空间时候 aof-load-truncated yes ################################ LUA SCRIPTING ############################### # Max execution time of a Lua script in milliseconds. # LUA脚本的最大执行时间:毫秒 # # If the maximum execution time is reached Redis will log that a script is # still in execution after the maximum allowed time and will start to # reply to queries with an error # 如果到达最大执行时间,REDIS将会记录在最大运行时间后的在执行脚本并返回查询异常 # # When a long running script exceeds the maximum execution time only the # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be # used to stop a script that did not yet called write commands. The second # is the only way to shut down the server in the case a write command was # already issued by the script but the user doesn't want to wait for the natural # termination of the script. # 当一个超长的脚本最大执行时间运行时只有 “SCRIPT KILL” 和“SHOWDOWN NOSAVE”命令 # 有效。第一个用于停止不再写, 第二个只是关闭服务,但是用户不想等已经确认的命令执行 # 完毕 # # # Set it to 0 or a negative value for unlimited execution without warnings. # 设置为0 或者负数是不限制执行 没有警告 lua-time-limit 5000 ################################## SLOW LOG ################################### # The Redis Slow Log is a system to log queries that exceeded a specified # execution time. The execution time does not include the I/O operations # like talking with the client, sending the reply and so forth, # but just the time needed to actually execute the command (this is the only # stage of command execution where the thread is blocked and can not serve # other requests in the meantime). # REDIS Slowlog 是记录执行超时时候的查询。这个执行时间不包含I/O操作告诉客户、 # 发送响应等等 # 至是需要实际上执行命令需要的时间(线程被阻断时执行命令的步骤,不同同时支持其他 # 服务) # # You can configure the slow log with two parameters: one tells Redis # what is the execution time, in microseconds, to exceed in order for the # command to get logged, and the other parameter is the length of the # slow log. When a new command is logged the oldest one is removed from the # queue of logged commands. # 可以给slow log定义两个参数: 一个告诉REDIS执行时间,微秒单位,使得超时命令写入 # 日志,另外一个参数记录日志的长度。当一个新的命令记录后,最后一个命令从队列中移除 # The following time is expressed in microseconds, so 1000000 is equivalent # to one second. Note that a negative number disables the slow log, while # a value of zero forces the logging of every command. # 时间使用微秒为单位如1000 000 为1秒。 注意负数会禁用slowlog,0表示实时记录 slowlog-log-slower-than 10000 # There is no limit to this length. Just be aware that it will consume memory. # You can reclaim memory used by the slow log with SLOWLOG RESET. # 这里没有现在长度,但是发现其会消耗内存,可以通过SLOWLOG RESET 回收被slow log占用 # 的内存 slowlog-max-len 128 ################################ LATENCY MONITOR 后台监控############################## # The Redis latency monitoring subsystem samples different operations # at runtime in order to collect data related to possible sources of # latency of a Redis instance. # REDIS运行时候, 后台子系统案例的不同操作 收集redis 实例的实例的数据 # # Via the LATENCY command this information is available to the user that can # print graphs and obtain reports. # 通过LATENCY命令给用户打出图像和获取报告 # # The system only logs operations that were performed in a time equal or # greater than the amount of milliseconds specified via the # latency-monitor-threshold configuration directive. When its value is set # to zero, the latency monitor is turned off. # 系统只记录执行等与或超过定的时间的操作(微秒),通过latency-monitor-threshold # 设置为0是关闭后台建库 # # By default latency monitoring is disabled since it is mostly not needed # if you don't have latency issues, and collecting data has a performance # impact, that while very small, can be measured under big load. Latency # monitoring can easily be enalbed at runtime using the command # "CONFIG SET latency-monitor-threshold <milliseconds>" if needed. # 默认清空下后台监控是关闭的,因为几乎不需要。如果你没有后台问题,收集数据有性能 # 影响 ,这个地方设置非常小。可以在用的大负载时候的评估检测。如果需要后台监控很容易 # 在运行时用命令设置(CONFIG SET latency-monitor-threshold <微秒>" ) latency-monitor-threshold 0 ############################# Event notification 注意事项############################## # Redis can notify Pub/Sub clients about events happening in the key space. # Redis 可以给Pub/Sub客户端发送定义key的范围的通知 # This feature is documented at http://redis.io/topics/notifications # 文件参考 # # For instance if keyspace events notification is enabled, and a client # performs a DEL operation on key "foo" stored in the Database 0, two # messages will be published via Pub/Sub: # 如果实例中的key空间事件通知开启一个客户端删除在db“foo”,PUB/Sub 将会发送两条信息。 # # PUBLISH __keyspace@0__:foo del # PUBLISH __keyevent@0__:del foo # # It is possible to select the events that Redis will notify among a set # of classes. Every class is identified by a single character: # 选择一个REDIS通知类型中的事件:所有类型都用一个字母定义 # # K Keyspace events, published with __keyspace@<db>__ prefix. # E Keyevent events, published with __keyevent@<db>__ prefix. # g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ... # $ String commands # l List commands # s Set commands # h Hash commands # z Sorted set commands # x Expired events (events generated every time a key expires) # e Evicted events (events generated when a key is evicted for maxmemory) # A Alias for g$lshzxe, so that the "AKE" string means all the events. # K key空间事件,烦死在 __keyspace@<db>__ prefix. # E key事件 __keyevent@<db>__ prefix. # g 没有特殊定义的通用命令DEL, EXPIRE, RENAME, ... # $ 字符串命令 # l 命令列表 # s 设置命令 # h 哈希命令 # z 有序设置命令 # x 过期事件(events generated every time a key expires) # e 销毁事件(events generated when a key is evicted for maxmemory) # A 别名 “g$lshzxe”, "AKE" 指所有事件 # # The "notify-keyspace-events" takes as argument a string that is composed # of zero or multiple characters. The empty string means that notifications # are disabled. # "notify-keyspace-events" 使用0个或多个字符做为参数,空参数表示不启用。 # # # Example: to enable list and generic events, from the point of view of the # event name, use: # 例子: 从事件名称的角度,打开列表和基本事件使用 # # notify-keyspace-events Elg # # Example 2: to get the stream of the expired keys subscribing to channel # name __keyevent@0__:expired use: # 例子2: 获取订阅的渠道的过期key的名称使用 # # notify-keyspace-events Ex # # By default all notifications are disabled because most users don't need # this feature and the feature has some overhead. Note that if you don't # specify at least one of K or E, no events will be delivered. # 由于大部分用户不需要使用这个特性,所以通知是被默认关闭的。注意如果使用一个 # ‘K’或 ‘E’的时候,不会有事件被发送 notify-keyspace-events "" ############################### ADVANCED CONFIG ############################### # Hashes are encoded using a memory efficient data structure when they have a # small number of entries, and the biggest entry does not exceed a given # threshold. These thresholds can be configured using the following directives. # 哈希是高效的内存数据结构编码方式,当有少量条目,大条目不超过给定域。这些域 # 可以用下面项来配置: hash-max-ziplist-entries 512 hash-max-ziplist-value 64 # Similarly to hashes, small lists are also encoded in a special way in order # to save a lot of space. The special representation is only used when # you are under the following limits: # 简单的哈希,小列表也会哈希为节省空间。特殊的表示仅用于你在下面的限制 list-max-ziplist-entries 512 list-max-ziplist-value 64 # Sets have a special encoding in just one case: when a set is composed # of just strings that happen to be integers in radix 10 in the range # of 64 bit signed integers. # The following configuration setting sets the limit in the size of the # set in order to use this special memory saving encoding. # 设置特定编码仅在下面这种情况:当一组由刚刚字符串恰好是在64位有符号整数的范围 # 整数基数10。 # 下面的设置限制制定内存的编码 set-max-intset-entries 512 # Similarly to hashes and lists, sorted sets are also specially encoded in # order to save a lot of space. This encoding is only used when the length and # elements of a sorted set are below the following limits: # 常见的哈希和列表、排序、sets 也会被特定编码。仅用于一个有序堆栈的元素做下面的限制 zset-max-ziplist-entries 128 zset-max-ziplist-value 64 # HyperLogLog sparse representation bytes limit. The limit includes the # 16 bytes header. When an HyperLogLog using the sparse representation crosses # this limit, it is converted into the dense representation. # # A value greater than 16000 is totally useless, since at that point the # dense representation is more memory efficient. # # The suggested value is ~ 3000 in order to have the benefits of # the space efficient encoding without slowing down too much PFADD, # which is O(N) with the sparse encoding. The value can be raised to # ~ 10000 when CPU is not a concern, but space is, and the data set is # composed of many HyperLogLogs with cardinality in the 0 - 15000 range. hll-sparse-max-bytes 3000 # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in # order to help rehashing the main Redis hash table (the one mapping top-level # keys to values). The hash table implementation Redis uses (see dict.c) # performs a lazy rehashing: the more operation you run into a hash table # that is rehashing, the more rehashing "steps" are performed, so if the # server is idle the rehashing is never complete and some more memory is used # by the hash table. # # The default is to use this millisecond 10 times every second in order to # actively rehash the main dictionaries, freeing memory when possible. # # If unsure: # use "activerehashing no" if you have hard latency requirements and it is # not a good thing in your environment that Redis can reply from time to time # to queries with 2 milliseconds delay. # # use "activerehashing yes" if you don't have such hard requirements but # want to free memory asap when possible. activerehashing yes # The client output buffer limits can be used to force disconnection of clients # that are not reading data from the server fast enough for some reason (a # common reason is that a Pub/Sub client can't consume messages as fast as the # publisher can produce them). # 客户端输出缓存现在适用于强制断开客户端但 # # The limit can be set differently for the three different classes of clients: # # normal -> normal clients including MONITOR clients # slave -> slave clients # pubsub -> clients subscribed to at least one pubsub channel or pattern # # The syntax of every client-output-buffer-limit directive is the following: # # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds> # # A client is immediately disconnected once the hard limit is reached, or if # the soft limit is reached and remains reached for the specified number of # seconds (continuously). # So for instance if the hard limit is 32 megabytes and the soft limit is # 16 megabytes / 10 seconds, the client will get disconnected immediately # if the size of the output buffers reach 32 megabytes, but will also get # disconnected if the client reaches 16 megabytes and continuously overcomes # the limit for 10 seconds. # # By default normal clients are not limited because they don't receive data # without asking (in a push way), but just after a request, so only # asynchronous clients may create a scenario where data is requested faster # than it can read. # # Instead there is a default limit for pubsub and slave clients, since # subscribers and slaves receive data in a push fashion. # # Both the hard or the soft limit can be disabled by setting them to zero. client-output-buffer-limit normal 0 0 0 client-output-buffer-limit slave 256mb 64mb 60 client-output-buffer-limit pubsub 32mb 8mb 60 # Redis calls an internal function to perform many background tasks, like # closing connections of clients in timeout, purging expired keys that are # never requested, and so forth. # # Not all tasks are performed with the same frequency, but Redis checks for # tasks to perform according to the specified "hz" value. # # By default "hz" is set to 10. Raising the value will use more CPU when # Redis is idle, but at the same time will make Redis more responsive when # there are many keys expiring at the same time, and timeouts may be # handled with more precision. # # The range is between 1 and 500, however a value over 100 is usually not # a good idea. Most users should use the default of 10 and raise this up to # 100 only in environments where very low latency is required. hz 10 # When a child rewrites the AOF file, if the following option is enabled # the file will be fsync-ed every 32 MB of data generated. This is useful # in order to commit the file to the disk more incrementally and avoid # big latency spikes. aof-rewrite-incremental-fsync yes # Redis configuration file example Redis 配置文件示例 # Note on units: when memory size is needed, it is possible to specify # 注意单位:当需要设置内存大小的时候,可以使用常用的格式1k 5GB 4M等等: # it in the usual form of 1k 5GB 4M and so forth: # # 1k => 1000 bytes # 1kb => 1024 bytes # 1m => 1000000 bytes # 1mb => 1024*1024 bytes # 1g => 1000000000 bytes # 1gb => 1024*1024*1024 bytes # # units are case insensitive so 1GB 1Gb 1gB are all the same. # 单位大小写不敏感,因此1GB、1Gb、1gB都是一样的 ################################## INCLUDES ################################### # Include one or more other config files here. This is useful if you # have a standard template that goes to all Redis servers but also need # to customize a few per-server settings. Include files can include # other files, so use this wisely. # 这里可以引入一个或多个其他的配置文件。当你有一个标准的模板去运行所有的Redis # 服务,但是对每个服务有少量修改的时候,非常有用。Include files可以包含其他文件, # 因此应用很广泛。 # # Notice option "include" won't be rewritten by command "CONFIG REWRITE" # from admin or Redis Sentinel. Since Redis always uses the last processed # line as value of a configuration directive, you'd better put includes # at the beginning of this file to avoid overwriting config change at runtime. # # 注意“include”操作不能被来自admin或者Redis 守护的“CONFIG REWRITE”命令重写, # 由于Redis通常使用最后处理行,最好是把这个文件放在首行避免在运行时候重写 # If instead you are interested in using includes to override configuration # options, it is better to use include as the last line. # 如果你喜欢使用覆盖重写可以使用下面的写法 # # include /path/to/local.conf # include /path/to/other.conf ################################ GENERAL ##################################### # By default Redis does not run as a daemon. Use 'yes' if you need it. # Redis默认不是以守护进程的方式运行的,如果需要请设置为“yes”。 # Note that Redis will write a pid file in /var/run/redis.pid when daemonized. # 注意的是一旦使用守护方式运行将会输出进程id到一个pid文件 daemonize yes # When running daemonized, Redis writes a pid file in /var/run/redis.pid by # default. You can specify a custom pid file location here. # 当运行守护模式,Redis默认将pid写到文件var/run/redis.pid。我们可以自定义pid文件 pidfile /var/run/redis/redis.pid # Accept connections on the specified port, default is 6379. # If port 0 is specified Redis will not listen on a TCP socket. # 接受连接的自定义端口,默认为6379。 # 如果设置为0,Redis将不能通过TCP连接 port 6379 # TCP listen() backlog. # TCP 监听备份日志 # # In high requests-per-second environments you need an high backlog in order # to avoid slow clients connections issues. Note that the Linux kernel # will silently truncate it to the value of /proc/sys/net/core/somaxconn so # make sure to raise both the value of somaxconn and tcp_max_syn_backlog # in order to get the desired effect. # 在高并发请求环境需要设置高备份日志去避免客户端慢连接问题。注意 :Linux内核 # 会默认截取 /proc/sys/net/core/somaxconn 的值,因此需要提高somaxconn和 # tcp_max_syn_backlog的值以达到效果 tcp-backlog 511 # By default Redis listens for connections from all the network interfaces # available on the server. It is possible to listen to just one or multiple # interfaces using the "bind" configuration directive, followed by one or # more IP addresses. # 侦听服务器上网络的连接的默认接口。可以有一个或多个IP绑定到多个接口配置项, # # Examples: # 例子: # # bind 192.168.1.100 10.0.0.1 bind 127.0.0.1 # Specify the path for the Unix socket that will be used to listen for # incoming connections. There is no default, so Redis will not listen # on a unix socket when not specified. # 定义用于侦听传入连接的Unix套接字路径,这里没有默认值,因此当需要UNIX # 套接字连接时需要定义此项目 # # unixsocket /tmp/redis.sock # unixsocketperm 700 # Close the connection after a client is idle for N seconds (0 to disable) timeout 0 # 当客户端连接空闲N秒时关闭连接(0表示禁用,不关闭) # TCP keepalive. # TCP 长连接 # # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence # of communication. This is useful for two reasons: # # 如果非0,使用SO_KEEPALIVE给客户端发送TCP ACKS在缺乏连接的情况下: # # 1) Detect dead peers. # 2) Take the connection alive from the point of view of network # equipment in the middle. # 1)僵尸节点的检测 # 2)从中间网络设备的角度连接 # # On Linux, the specified value (in seconds) is the period used to send ACKs. # Note that to close the connection the double of the time is needed. # On other kernels the period depends on the kernel configuration. # 在Linux中,默认值(秒)一般事情发送ACKS。注意的是关闭连接需要两倍时间。 # 其他内核中处理期是依赖内核配置的 # # A reasonable value for this option is 60 seconds. # 一个合理的配置是 60秒 tcp-keepalive 0 # Specify the server verbosity level. # This can be one of: # debug (a lot of information, useful for development/testing) # verbose (many rarely useful info, but not a mess like the debug level) # notice (moderately verbose, what you want in production probably) # warning (only very important / critical messages are logged) # 定义服务器冗余等级 # 可以设置为以下几种 # debug:大量信息,用于开发或测试 # verbose:很多有用的信息,但是不像debug那么乱 # notice:可能需要的适量有用信息, # worning:只记录非常重要,至关重要的信息 loglevel notice # Specify the log file name. Also the empty string can be used to force # Redis to log on the standard output. Note that if you use standard # output for logging but daemonize, logs will be sent to /dev/null # 定义日志文件名称。也可以指控强制Redis使用标准输出。注意的是如果使用使用 # 标准输出而不是使用守护模式,日志将会被发送到 /dev/null logfile /var/log/redis/redis.log # To enable logging to the system logger, just set 'syslog-enabled' to yes, # and optionally update the other syslog parameters to suit your needs. # 打开日志记录到系统日志,将'syslog-enabled' 设置为 “yes”,可以根据需求自 # 行修改其他日志参数 # syslog-enabled no # Specify the syslog identity # 定义日志身份. # syslog-ident redis # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. # 自定义syslog,必须是一各用户或者在 LOCAL0-LOCAL7中 # syslog-facility local0 # Set the number of databases. The default database is DB 0, you can select # a different one on a per-connection basis using SELECT <dbid> where # dbid is a number between 0 and 'databases'-1 # 设置数据库的数目,默认的数据库是DB 0,在每次连接时候可以通过 SELECT <dbid> # 选择不同的数据库。(dbid的范围在0和数据库数目-1之间) databases 16 ################################ SNAPSHOTTING ################################ # # Save the DB on disk: # 保存数据到磁盘: # # save <seconds> <changes> # # Will save the DB if both the given number of seconds and the given # number of write operations against the DB occurred. # 在符合给定参数的情况发生时,将数据写入磁盘 # # In the example below the behaviour will be to save: # 一些例子在下面情况下将会发生: # after 900 sec (15 min) if at least 1 key changed # 15分钟内至少一个key发生变化 # after 300 sec (5 min) if at least 10 keys changed # 300秒内 至少10个key发生变化 # after 60 sec if at least 10000 keys changed # 50秒内 至少1000个key发生变化 # # Note: you can disable saving completely by commenting out all "save" lines. # 注意:可以禁用完全保存,通过所有“save”行 # It is also possible to remove all the previously configured save # points by adding a save directive with a single empty string argument # like in the following example: # 可以清除之前的的配置保存点,通过一个单独的空参数,如: # # save "" save 900 1 save 300 10 save 60 10000 # By default Redis will stop accepting writes if RDB snapshots are enabled # (at least one save point) and the latest background save failed. # This will make the user aware (in a hard way) that data is not persisting # on disk properly, otherwise chances are that no one will notice and some # disaster will happen. # 默认将会停止接受写,如果RDB镜像可以用或者最近的后台保存错误 # 将很难将数据持久化到磁盘上,否则异常发生 # # If the background saving process will start working again Redis will # automatically allow writes again. # 如果后台进程开始工作 RDIS将会再次自动开始写入 # # However if you have setup your proper monitoring of the Redis server # and persistence, you may want to disable this feature so that Redis will # continue to work as usual even if there are problems with disk, # permissions, and so forth. # 然而当你设置一个服务器监听或者持久化时候,你会想禁用这些特性保障Redis和正常 # 一样工作,除非硬盘、权限等等发生问题 stop-writes-on-bgsave-error yes # Compress string objects using LZF when dump .rdb databases? # For default that's set to 'yes' as it's almost always a win. # If you want to save some CPU in the saving child set it to 'no' but # the dataset will likely be bigger if you have compressible values or keys. # 当dump rdb时候使用压缩:默认设置是yes,通常着用用户 # 如果想保存一些cpu在子保存设备,可以设置为no,但是数据会变得很多,用使用可以 # 压缩的值或key # rdbcompression yes # Since version 5 of RDB a CRC64 checksum is placed at the end of the file. # This makes the format more resistant to corruption but there is a performance # hit to pay (around 10%) when saving and loading RDB files, so you can disable it # for maximum performances. # # RDB files created with checksum disabled have a checksum of zero that will # tell the loading code to skip the check. # 从RDB 5开始对文件尾部做CRC64校验和 rdbchecksum yes # The filename where to dump the DB # dump数据库的名称 dbfilename dump.rdb # The working directory. # 工作目录 # # The DB will be written inside this directory, with the filename specified # above using the 'dbfilename' configuration directive. # 数据库将会被写在内部目 和定义的文件名都在配置项 # # The Append Only File will also be created inside this directory. # 在这个目录附加到一个文件 # Note that you must specify a directory here, not a file name. # 必须设置一个目录在这,而不是一个文件名 dir /var/lib/redis/ ################################# REPLICATION ################################# # Master-Slave replication. Use slaveof to make a Redis instance a copy of # another Redis server. A few things to understand ASAP about Redis replication. # 主从复制,使用slaveof使得一个Reids实例被复制到另一台Redis服务器。需要尽快了解 # 的一些关于Redis复制的事情: # # 1) Redis replication is asynchronous, but you can configure a master to # stop accepting writes if it appears to be not connected with at least # a given number of slaves. # 2) Redis slaves are able to perform a partial resynchronization with the # master if the replication link is lost for a relatively small amount of # time. You may want to configure the replication backlog size (see the next # sections of this file) with a sensible value depending on your needs. # 3) Replication is automatic and does not need user intervention. After a # network partition slaves automatically try to reconnect to masters # and resynchronize with them. # 1)Redis复制是异步进行的,但是可以配置一个主停止接受写当至少一个从库不能连接时 # 2)Redis从库能够在连接段时间断开时部分同步。你可以根据需求配置合适的复制文件 # 大小(看本文下一部分) # 3)在从库自动连接主库并同步完成,复制可以不要用户操作而自动执行, # # slaveof <masterip> <masterport> # If the master is password protected (using the "requirepass" configuration # directive below) it is possible to tell the slave to authenticate before # starting the replication synchronization process, otherwise the master will # refuse the slave request. # 如果主库的密码是受保护的(使用下面的 requirepass 配置项),可以先告诉从库授权 # 在开始同步之前,否则主库将会拒绝连接 # # masterauth <master-password> # When a slave loses its connection with the master, or when the replication # is still in progress, the slave can act in two different ways: # 当从库和主库断开连接,或者复制还在运行,从库可以通过两种不同方式进行 # # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will # still reply to client requests, possibly with out of date data, or the # data set may just be empty if this is the first synchronization. # # 2) if slave-serve-stale-data is set to 'no' the slave will reply with # an error "SYNC with master in progress" to all the kind of commands # but to INFO and SLAVEOF. # 1)如果 slave-serve-stale-data 设置为‘yes’(默认值),从库将会回复客户过时 # 的数据,或者第一次访问时候返回空 # 2)如果 slave-serve-stale-data 设置为‘no’从库将会给除了INFO和SLAVEOF之外的 # 所有命令返回"SYNC with master in progress" # # slave-serve-stale-data yes # You can configure a slave instance to accept writes or not. Writing against # a slave instance may be useful to store some ephemeral data (because data # written on a slave will be easily deleted after resync with the master) but # may also cause problems if clients are writing to it because of a # misconfiguration. # 可以配置从库实例是否接受写。禁止写实例可能用于存储一些临时的数据(数据从主库同步后 # 写在从库比删快)。由于配置错误 但是可能引起一些问题,假如客户端在写 # # Since Redis 2.6 by default slaves are read-only. # 从2.6起 默认是只读配置 # Note: read only slaves are not designed to be exposed to untrusted clients # on the internet. It's just a protection layer against misuse of the instance. # Still a read only slave exports by default all the administrative commands # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve # security of read only slaves using 'rename-command' to shadow all the # administrative / dangerous commands. # 注意:默认只读不能用于检测不被信任的连接。至少用于防止不当使用的一个实例。 # 导出一个只读从库需要管理命令 如CONFIG ,DEBUG等等,在有限的范围内,使用 # “重命名命令”来隐藏所有的 管理/危险 命令以提高只读的安全性, slave-read-only yes # Replication SYNC strategy: disk or socket. # 复制同步策略:磁盘或套接字 # # ------------------------------------------------------- # WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY # 警告:无盘复制当前还在实验阶段 # ------------------------------------------------------- # # New slaves and reconnecting slaves that are not able to continue the replication # process just receiving differences, need to do what is called a "full # synchronization". An RDB file is transmitted from the master to the slaves. # The transmission can happen in two different ways: # 新的从库或者重连接的从库 不能够继续进行复制只能收到不同信息,需要调用“full syncharonization” # 命令。一个RDB文件将会从主库传送到从库。这个传送有两种不同方式: # # # 1) Disk-backed: The Redis master creates a new process that writes the RDB # file on disk. Later the file is transferred by the parent # process to the slaves incrementally. # 2) Diskless: The Redis master creates a new process that directly writes the # RDB file to slave sockets, without touching the disk at all. # 1)磁盘备份:主库将会创建一个新的进程在磁盘上写入RDB文件,之后RDB文件将会由 # 由父进程递增地传入从库 # 2)无盘备份:主库创建一个新的进程直接将RDB文件写入从库套接字。不会在磁盘上写。 # # With disk-backed replication, while the RDB file is generated, more slaves # can be queued and served with the RDB file as soon as the current child producing # the RDB file finishes its work. With diskless replication instead once # the transfer starts, new slaves arriving will be queued and a new transfer # will start when the current one terminates. # 通过磁盘备份复制,当rdb文件被创建后,许多在队列中等待的RDB文件的从库将会尽快的处理完 # RDB文件。通过无盘复制可以代替一次传送的启动,新的从库到达队列后需要等到当前传输终止 # 才能够进行。 # # When diskless replication is used, the master waits a configurable amount of # time (in seconds) before starting the transfer in the hope that multiple slaves # will arrive and the transfer can be parallelized. # 使用无盘复制时,主库希望配置一个时间(单位:秒)在期待传输钱,以保障多从库到达时候传送 # 能够并行执行 # # With slow disks and fast (large bandwidth) networks, diskless replication # works better. # 使用慢磁盘和快(带宽大)网络时,无盘复制会更好 repl-diskless-sync no # When diskless replication is enabled, it is possible to configure the delay # the server waits in order to spawn the child that trnasfers the RDB via socket # to the slaves. # 当无盘复制被启用。可能需要配置延迟服务器等待以确保分出子进程去通过套接字将RDB传入从库。 # # This is important since once the transfer starts, it is not possible to serve # new slaves arriving, that will be queued for the next RDB transfer, so the server # waits a delay in order to let more slaves arrive. # 重要的是由于一旦传送开始,就不能够给新到达的从库提供服务,新的从库将会在队列中等待 #下次RDB传输, 因此服务器等待一个延迟以确保 让更多的从库到达 # # The delay is specified in seconds, and by default is 5 seconds. To disable # it entirely just set it to 0 seconds and the transfer will start ASAP. # 这个延迟以秒为单位,默认5s,禁用时只需要设置为0那么传送将会尽快开始 repl-diskless-sync-delay 5 # Slaves send PINGs to server in a predefined interval. It's possible to change # this interval with the repl_ping_slave_period option. The default value is 10 # seconds. # 从库在一个预定的时间间隔内给服务器服务器发生PINGs。能够通过修改 # repl_ping_slave_period调整时间间隔。默认10s # # repl-ping-slave-period 10 # The following option sets the replication timeout for: # 下面操作是设置传送超时的: # # 1) Bulk transfer I/O during SYNC, from the point of view of slave. # 2) Master timeout from the point of view of slaves (data, pings). # 3) Slave timeout from the point of view of masters (REPLCONF ACK pings). # 1)从从库的角度看带来的传送I/O在同步中 # 2)从从库看主库超时(数据,PINGs) # 3)在主库可以通过ACK看到从库超时 # # It is important to make sure that this value is greater than the value # specified for repl-ping-slave-period otherwise a timeout will be detected # every time there is low traffic between the master and the slave. # 需要确保 repl-timeout的值答应定义的 repl-ping-slave-period的值,否则将会 # 每次检测超时,主从之间的传输就会变得很慢 # # repl-timeout 60 # Disable TCP_NODELAY on the slave socket after SYNC? # 是否在同步后禁用从库套接字TCP_NODELAY # # If you select "yes" Redis will use a smaller number of TCP packets and # less bandwidth to send data to slaves. But this can add a delay for # the data to appear on the slave side, up to 40 milliseconds with # Linux kernels using a default configuration. # 如果设置为“yes”,Redis会占用一点点带宽发送少量的TCP包给从库。但是会对数据 # 在从库上出现产生一些延迟,多达40毫秒,取决于LINUX 内核的默认设置 # # If you select "no" the delay for data to appear on the slave side will # be reduced but more bandwidth will be used for replication. # 设置为“no”是将会降低延迟,但是会占用很多带宽。 # # By default we optimize for low latency, but in very high traffic conditions # or when the master and slaves are many hops away, turning this to "yes" may # be a good idea. # 默认设置优化为低延迟,但是在多连接或者主从之间有很多跳的时候,设置为yes会好些 repl-disable-tcp-nodelay no # Set the replication backlog size. The backlog is a buffer that accumulates # slave data when slaves are disconnected for some time, so that when a slave # wants to reconnect again, often a full resync is not needed, but a partial # resync is enough, just passing the portion of data the slave missed while # disconnected. # 设置复制备份文件大小。备份文件是当从库偶尔断开是的缓存积累,当一个从库重新连接后 # 经常不需要完全传送,只需要一个部分传送继续。只传送从库在断开连接时缺失的部分 # # The bigger the replication backlog, the longer the time the slave can be # disconnected and later be able to perform a partial resynchronization. # 大备份文件,长时间的备份会引起从库断开连接或者延迟,或者部分同步 # # The backlog is only allocated once there is at least a slave connected. # 至少有一个从库连接时需要分配backlog # # repl-backlog-size 1mb # After a master has no longer connected slaves for some time, the backlog # will be freed. The following option configures the amount of seconds that # need to elapse, starting from the time the last slave disconnected, for # the backlog buffer to be freed. # 主库不在被连接时,backlog将会被释放。下面的配置释放需要的延迟时间(秒): # 从最后一个从库断开到backlog缓存被释放 # # A value of 0 means to never release the backlog. # 0 表示不释放backlog:而不是立即释放 # repl-backlog-ttl 3600 # The slave priority is an integer number published by Redis in the INFO output. # It is used by Redis Sentinel in order to select a slave to promote into a # master if the master is no longer working correctly. # 从库的优先级是一个整型数( REDIS中用INFO 可以输出)。 # 使用REDIS守护确保选择一个从库当主库不在正常工作时。 # # A slave with a low priority number is considered better for promotion, so # for instance if there are three slaves with priority 10, 100, 25 Sentinel will # pick the one with priority 10, that is the lowest. # 数字越小优先级越高,因此一个实例中如果从库优先级有10,100,25,守护将会现在最 # 小的10 # # However a special priority of 0 marks the slave as not able to perform the # role of master, so a slave with priority of 0 will never be selected by # Redis Sentinel for promotion. # 但是特殊定义0,意味着从库不能扮演主库的角色,因此优先级设置为0的从库,不会被选中 # # By default the priority is 100. # 默认的优先级是100 slave-priority 100 # It is possible for a master to stop accepting writes if there are less than # N slaves connected, having a lag less or equal than M seconds. # 少于N个从库连接时候,主库可能需要停止写,需要设置一个少于或等于M秒的参数 # # The N slaves need to be in "online" state. # N个从库应该为“online”状态 # # The lag in seconds, that must be <= the specified value, is calculated from # the last ping received from the slave, that is usually sent every second. # 滞后的秒数必须小于或等于定义的值,是有最后的来自从库的ping值来计算的,通常 # ping每秒都会发送。 # This option does not GUARANTEE that N replicas will accept the write, but # will limit the window of exposure for lost writes in case not enough slaves # are available, to the specified number of seconds. # 这个操作不能保障 N个复制接受写,但是会限制暴露的窗口使用情况以防没有足够的从库 # 可用,去定义秒数 # For example to require at least 3 slaves with a lag <= 10 seconds use: # 假如需要3个从库在小于10秒的延迟后使用。 # min-slaves-to-write 3 # min-slaves-max-lag 10 # # Setting one or the other to 0 disables the feature. # 设置为1 或遏制另外一个设置为0会禁止特性 # # By default min-slaves-to-write is set to 0 (feature disabled) and # min-slaves-max-lag is set to 10. # 默认设置 min-slaves-to-write是0(禁用) min-slaves-max-lag是10 ################################## SECURITY ################################### # Require clients to issue AUTH <PASSWORD> before processing any other # commands. This might be useful in environments in which you do not trust # others with access to the host running redis-server. # 在客户端处理前需要验证AUTH。可以用于不是特别被信任的环境运行的redis-server # This should stay commented out for backward compatibility and because most # people do not need auth (e.g. they run their own servers). # 应该保持注释掉的向后兼容性,因为大多人不需要auth(运行自己的服务器) # # Warning: since Redis is pretty fast an outside user can try up to # 150k passwords per second against a good box. This means that you should # use a very strong password otherwise it will be very easy to break. # 警告:由于redis很快,外部用户可以从一个窗口以150k/s的速度发送密码来攻击。 # 必须设置一个非常健壮的密码,否则很容易被攻破。 # requirepass foobared # Command renaming. # 重命名命令 # # It is possible to change the name of dangerous commands in a shared # environment. For instance the CONFIG command may be renamed into something # hard to guess so that it will still be available for internal-use tools # but not available for general clients. # 在共享环境中有可能需要修改一些危险命令。一个实例中的CONFIG 命令可以被重命名为 # 一些难以猜的命令,因此在内部使用时候还是有效的,但是一般的客户端是无效的 # # Example: # 例如 # # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 # # It is also possible to completely kill a command by renaming it into # an empty string: # 也可以通过设置为空来关闭这个命令, # # rename-command CONFIG "" # # Please note that changing the name of commands that are logged into the # AOF file or transmitted to slaves may cause problems. # 请注意,修改命令的名称不能够记录到AOF文件或者传送到从服,否则会引起问题 ################################### LIMITS #################################### # Set the max number of connected clients at the same time. By default # this limit is set to 10000 clients, however if the Redis server is not # able to configure the process file limit to allow for the specified limit # the max number of allowed clients is set to the current file limit # minus 32 (as Redis reserves a few file descriptors for internal uses). # 设置最大客户端同一时间连接数。默认设置10000个连接,但是如果Redis服务器不能支持 # 处理这么多文件,应以允许指定的限制允许的客户端的最大数量被设置为当前文件限制 # 减去32(作为Redis的保留了一个一些文件描述符供内部使用)。 # # Once the limit is reached Redis will close all the new connections sending # an error 'max number of clients reached'. # 一旦连接数到达Redis将会不按包所有新的连接 并抛出“max number of clients reached”。 # # maxclients 10000 # Don't use more memory than the specified amount of bytes. # When the memory limit is reached Redis will try to remove keys # according to the eviction policy selected (see maxmemory-policy). # 不要使用多余定义位数的内存,当内存占用达到时候,Redis将会尝试根据回收规则删除keys # (参见:最大内存规则) # # If Redis can't remove keys according to the policy, or if the policy is # set to 'noeviction', Redis will start to reply with errors to commands # that would use more memory, like SET, LPUSH, and so on, and will continue # to reply to read-only commands like GET. # 如果Redis不能够根据规则移除keys,或者规则被设置为“noeviction”,如果要使用更多内存 #Redis将会在执行命令时候返回错误,像SET ,LPUSH等到呢个,可以继续执行只读命令如 GET # # # This option is usually useful when using Redis as an LRU cache, or to set # a hard memory limit for an instance (using the 'noeviction' policy). # 这个操作用于当REDIS作为LRU缓存时候,或者设置一个代内存限制一个实例 # # WARNING: If you have slaves attached to an instance with maxmemory on, # the size of the output buffers needed to feed the slaves are subtracted # from the used memory count,so that network problems / resyncs will # not trigger a loop where keys are evicted, and in turn the output # buffer of slaves is full with DELs of keys evicted triggering the deletion # of more keys, and so forth until the database is completely emptied. # 如果有从库依附于一个设置了maxmemory的实例,输出输出缓存的大小需要支持从何 # 警告:如果有从库访问一个maxmerory打开的实例,这个支持从库的输出缓存的大小需要从已使用 # 的内存数中减去,使得网络问题、重新同步不会在keys被移除时触发循环,依次输出的从库缓存被 # 已经释放key的删除占满,触发删除更多key。以此类推知道数据库被完全清空 # # In short... if you have slaves attached it is suggested that you set a lower # limit for maxmemory so that there is some free RAM on the system for slave # output buffers (but this is not needed if the policy is 'noeviction'). # 简而言之。如果有从库依附,建议设置一个小的现在给maxmemory.区别有空闲的内存留给,从库 # 缓存输出(如果规则是“noeviction” 将不需要 ) # # maxmemory <bytes> # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory # is reached. You can select among five behaviors: # MAXMEMORY 规则:redis是如何在最大内存到达时选择移除的,有5种方式: # # volatile-lru -> remove the key with an expire set using an LRU algorithm # allkeys-lru -> remove any key according to the LRU algorithm # volatile-random -> remove a random key with an expire set # allkeys-random -> remove a random key, any key # volatile-ttl -> remove the key with the nearest expire time (minor TTL) # noeviction -> don't expire at all, just return an error on write operations # # volatile-lru -> 删除过期key使用LRU计算 # allkeys-lru -> 删除所有key使用LRU计算 # volatile-random -> 根据过期规则删除 一个 key # allkeys-random -> 删除任意key # volatile-ttl -> 根据最近过期时间删除一个key(minor TTL) # noeviction -> 不过期,写的时候返回错误。 # Note: with any of the above policies, Redis will return an error on write # operations, when there are no suitable keys for eviction. # # At the date of writing these commands are: set setnx setex append # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby # getset mset msetnx exec sort # 注意: 使用以上热河规则时候,Redis都会在没有符合条件的释放key时发生写错误 # # 当前设置这些命令: set setnx setex append # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby # getset mset msetnx exec sort # # The default is: # 默认规则是: # # maxmemory-policy volatile-lru # LRU and minimal TTL algorithms are not precise algorithms but approximated # algorithms (in order to save memory), so you can select as well the sample # size to check. For instance for default Redis will check three keys and # pick the one that was used less recently, you can change the sample size # using the following configuration directive. # LRU和自动延迟计算不是精确的计算但是是准确计算(为了保存内存),因可以可以选择 # 一个样本中的大小检验。 对Redis默认实例将会检验3个keys选择一个最近很少使用的key, # 可以使用下面的配置项修改样本大小 # # maxmemory-samples 3 ############################## APPEND ONLY MODE ############################### # By default Redis asynchronously dumps the dataset on disk. This mode is # good enough in many applications, but an issue with the Redis process or # a power outage may result into a few minutes of writes lost (depending on # the configured save points). # 通过默认的REDIS异步转存数据到磁盘。这种模式对于很多应用来说已经很好,但是一个 # redis处理问题或者电源故障都会引起几分钟的写丢失(取决于:设置的保存点) # # The Append Only File is an alternative persistence mode that provides # much better durability. For instance using the default data fsync policy # (see later in the config file) Redis can lose just one second of writes in a # dramatic event like a server power outage, or a single write if something # wrong with the Redis process itself happens, but the operating system is # still running correctly. # AOF 方式是代替rdb的一个持久方式。实例中使用默认的数据同步方式(配置文件下面) # REDIS会在异常中丢失1秒的写(如电源故障,或者只写是REDIS进程自己发生异常), # 但是服务器正常运行。 # # AOF and RDB persistence can be enabled at the same time without problems. # If the AOF is enabled on startup Redis will load the AOF, that is the file # with the better durability guarantees. # AOF 和RDB 可以同时执行。如果AOF打开,启动REDIS时候会加载AOF,因为这个文件有好的 # 持久保障 # # Please check http://redis.io/topics/persistence for more information. # 了解更多内容,请参见: appendonly no # The name of the append only file (default: "appendonly.aof") # AOF 文件名称(默认:“appendonly.aof”) appendfilename "appendonly.aof" # The fsync() call tells the Operating System to actually write data on disk # instead of waiting for more data in the output buffer. Some OS will really flush # data on disk, some other OS will just try to do it ASAP. # fsysnc给操作系统发生指令将数据写入磁盘代替等待在数据缓存中的更多数据。一些系统会 # 刷新磁盘数据,也有一些很快处理完 。 # # Redis supports three different modes: # Redis支持有三种模式: # # no: don't fsync, just let the OS flush the data when it wants. Faster. # always: fsync after every write to the append only log. Slow, Safest. # everysec: fsync only one time every second. Compromise. # no:不同步写入。仅仅在需要的时候由操作系统刷新数据。比较快 # always: 一旦有写操作就同步写入文件。慢、最安全 # erverysec:每秒同步写一次。折中。 # # The default is "everysec", as that's usually the right compromise between # speed and data safety. It's up to you to understand if you can relax this to # "no" that will let the operating system flush the output buffer when # it wants, for better performances (but if you can live with the idea of # some data loss consider the default persistence mode that's snapshotting), # or on the contrary, use "always" that's very slow but a bit safer than # everysec. # 默认的方式是“ererysec”,速度和安全中是一个比较折中的合适选择。 取决你的理解 #程度,如果你想为了更好的用户体验,设置为“no”,让操作系统刷新数据输出缓存( 但是如你能接受一些数据快照中的数据丢失,) # # More details please check the following article: 、 # 详细内容参考下面文章: # http://antirez.com/post/redis-persistence-demystified.html # # If unsure, use "everysec". # 如果不确定,请使用“everysec” # appendfsync always appendfsync everysec # appendfsync no # When the AOF fsync policy is set to always or everysec, and a background # saving process (a background save or AOF log background rewriting) is # performing a lot of I/O against the disk, in some Linux configurations # Redis may block too long on the fsync() call. Note that there is no fix for # this currently, as even performing fsync in a different thread will block # our synchronous write(2) call. # 一旦AOF同步写规则本设置成always或者everysec,后台将其保留一个进程产生大量的 # 磁盘I/O,在一些Liunx配置中 REDIS可以阻断太长的rsync()调用。注意这里没有锁定当前 #,设置其他的fsync在一个不同的进程中会阻断同步写 # # # In order to mitigate this problem it's possible to use the following option # that will prevent fsync() from being called in the main process while a # BGSAVE or BGREWRITEAOF is in progress. # 为了减轻我们在使用中的问题。当“BGSAVE”或者“BGREWRITEAOF”运行时,防止fsync #在主进程中被调用 # # This means that while another child is saving, the durability of Redis is # the same as "appendfsync none". In practical terms, this means that it is # possible to lose up to 30 seconds of log in the worst scenario (with the # default Linux settings). # 当另外一个子进程在保存,Redis持久化是和"appendfsync none" 一样的。实际应用中 # 会产生丢失30秒(根据LINUX默认配置) # # If you have latency problems turn this to "yes". Otherwise leave it as # "no" that is the safest pick from the point of view of durability. # 如果有延迟问题 设置此项为“yes”。否则 设置为no将是最安全的持久化 no-appendfsync-on-rewrite no # Automatic rewrite of the append only file. # Redis is able to automatically rewrite the log file implicitly calling # BGREWRITEAOF when the AOF log size grows by the specified percentage. # 自动重写AOF文件 # 当AOF文件,增长到设置的百分比,REDIS能够调用BAREWRITFOF自动在后台重写文件 # # This is how it works: Redis remembers the size of the AOF file after the # latest rewrite (if no rewrite has happened since the restart, the size of # the AOF at startup is used). # 是这样工作的:REDIS在最后一次重写后记录AOF文件的大小(如果重启后没有重写,大小 # 就是AOF文件开始时的大小) # # This base size is compared to the current size. If the current size is # bigger than the specified percentage, the rewrite is triggered. Also # you need to specify a minimal size for the AOF file to be rewritten, this # is useful to avoid rewriting the AOF file even if the percentage increase # is reached but it is still pretty small. # 这个原始大小和当前大小比较,如果当前大小多出定义的百分比,重写将会被触发。 # 也可以给要重新的文件可以定义一个小的大小,可以避免当增长到定义的百分比时还很小 #而重写AOF # # Specify a percentage of zero in order to disable the automatic AOF # 设置百分比为0的时候禁用自动AOF # rewrite feature. # 重写特性 auto-aof-rewrite-percentage 100 auto-aof-rewrite-min-size 64mb # An AOF file may be found to be truncated at the end during the Redis # startup process, when the AOF data gets loaded back into memory. # This may happen when the system where Redis is running # crashes, especially when an ext4 filesystem is mounted without the # data=ordered option (however this can't happen when Redis itself # crashes or aborts but the operating system still works correctly). # AOF在REDIS开启时候数据被完全加入内存后,可以被清除。 发生在运行REDIS的 # 系统崩溃,特别是ext4文件系统在挂载时候使用data=ordered操作(然后这个不能发生 # 在REDIS自己崩溃或者终止但是系统依然运行时候) # # Redis can either exit with an error when this happens, or load as much # data as possible (the default now) and start if the AOF file is found # to be truncated at the end. The following option controls this behavior. # REDIS可以在异常发生,或者加载了最多文件时候启动而AOF被清空的情况下退出。下面 # 操作来控制这个行为 # # If aof-load-truncated is set to yes, a truncated AOF file is loaded and # the Redis server starts emitting a log to inform the user of the event. # Otherwise if the option is set to no, the server aborts with an error # and refuses to start. When the option is set to no, the user requires # to fix the AOF file using the "redis-check-aof" utility before to restart # the server. # 如果aof-load-truncated 被设置为yes,一个清空的AOF文件被加载,redis服务器开始 # 发送一个日志记录用户事件。否则被设置为no,服务器会异常退出拒绝启动。当此项 # 被设置为no,用户需要在重启服务器前确定"redis-check-aof" 有效 # # Note that if the AOF file will be found to be corrupted in the middle # the server will still exit with an error. This option only applies when # Redis will try to read more data from the AOF file but not enough bytes # will be found. # 注意如果AOF文件中间出现异常,服务器依然将会退出。这个操作紧紧出现在应用redis # 从AOF 文件中读取更多的数据,但是没有足够的空间时候 aof-load-truncated yes ################################ LUA SCRIPTING ############################### # Max execution time of a Lua script in milliseconds. # LUA脚本的最大执行时间:毫秒 # # If the maximum execution time is reached Redis will log that a script is # still in execution after the maximum allowed time and will start to # reply to queries with an error # 如果到达最大执行时间,REDIS将会记录在最大运行时间后的在执行脚本并返回查询异常 # # When a long running script exceeds the maximum execution time only the # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be # used to stop a script that did not yet called write commands. The second # is the only way to shut down the server in the case a write command was # already issued by the script but the user doesn't want to wait for the natural # termination of the script. # 当一个超长的脚本最大执行时间运行时只有 “SCRIPT KILL” 和“SHOWDOWN NOSAVE”命令 # 有效。第一个用于停止不再写, 第二个只是关闭服务,但是用户不想等已经确认的命令执行 # 完毕 # # # Set it to 0 or a negative value for unlimited execution without warnings. # 设置为0 或者负数是不限制执行 没有警告 lua-time-limit 5000 ################################## SLOW LOG ################################### # The Redis Slow Log is a system to log queries that exceeded a specified # execution time. The execution time does not include the I/O operations # like talking with the client, sending the reply and so forth, # but just the time needed to actually execute the command (this is the only # stage of command execution where the thread is blocked and can not serve # other requests in the meantime). # REDIS Slowlog 是记录执行超时时候的查询。这个执行时间不包含I/O操作告诉客户、 # 发送响应等等 # 至是需要实际上执行命令需要的时间(线程被阻断时执行命令的步骤,不同同时支持其他 # 服务) # # You can configure the slow log with two parameters: one tells Redis # what is the execution time, in microseconds, to exceed in order for the # command to get logged, and the other parameter is the length of the # slow log. When a new command is logged the oldest one is removed from the # queue of logged commands. # 可以给slow log定义两个参数: 一个告诉REDIS执行时间,微秒单位,使得超时命令写入 # 日志,另外一个参数记录日志的长度。当一个新的命令记录后,最后一个命令从队列中移除 # The following time is expressed in microseconds, so 1000000 is equivalent # to one second. Note that a negative number disables the slow log, while # a value of zero forces the logging of every command. # 时间使用微秒为单位如1000 000 为1秒。 注意负数会禁用slowlog,0表示实时记录 slowlog-log-slower-than 10000 # There is no limit to this length. Just be aware that it will consume memory. # You can reclaim memory used by the slow log with SLOWLOG RESET. # 这里没有现在长度,但是发现其会消耗内存,可以通过SLOWLOG RESET 回收被slow log占用 # 的内存 slowlog-max-len 128 ################################ LATENCY MONITOR 后台监控############################## # The Redis latency monitoring subsystem samples different operations # at runtime in order to collect data related to possible sources of # latency of a Redis instance. # REDIS运行时候, 后台子系统案例的不同操作 收集redis 实例的实例的数据 # # Via the LATENCY command this information is available to the user that can # print graphs and obtain reports. # 通过LATENCY命令给用户打出图像和获取报告 # # The system only logs operations that were performed in a time equal or # greater than the amount of milliseconds specified via the # latency-monitor-threshold configuration directive. When its value is set # to zero, the latency monitor is turned off. # 系统只记录执行等与或超过定的时间的操作(微秒),通过latency-monitor-threshold # 设置为0是关闭后台建库 # # By default latency monitoring is disabled since it is mostly not needed # if you don't have latency issues, and collecting data has a performance # impact, that while very small, can be measured under big load. Latency # monitoring can easily be enalbed at runtime using the command # "CONFIG SET latency-monitor-threshold <milliseconds>" if needed. # 默认清空下后台监控是关闭的,因为几乎不需要。如果你没有后台问题,收集数据有性能 # 影响 ,这个地方设置非常小。可以在用的大负载时候的评估检测。如果需要后台监控很容易 # 在运行时用命令设置(CONFIG SET latency-monitor-threshold <微秒>" ) latency-monitor-threshold 0 ############################# Event notification 注意事项############################## # Redis can notify Pub/Sub clients about events happening in the key space. # Redis 可以给Pub/Sub客户端发送定义key的范围的通知 # This feature is documented at http://redis.io/topics/notifications # 文件参考 # # For instance if keyspace events notification is enabled, and a client # performs a DEL operation on key "foo" stored in the Database 0, two # messages will be published via Pub/Sub: # 如果实例中的key空间事件通知开启一个客户端删除在db“foo”,PUB/Sub 将会发送两条信息。 # # PUBLISH __keyspace@0__:foo del # PUBLISH __keyevent@0__:del foo # # It is possible to select the events that Redis will notify among a set # of classes. Every class is identified by a single character: # 选择一个REDIS通知类型中的事件:所有类型都用一个字母定义 # # K Keyspace events, published with __keyspace@<db>__ prefix. # E Keyevent events, published with __keyevent@<db>__ prefix. # g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ... # $ String commands # l List commands # s Set commands # h Hash commands # z Sorted set commands # x Expired events (events generated every time a key expires) # e Evicted events (events generated when a key is evicted for maxmemory) # A Alias for g$lshzxe, so that the "AKE" string means all the events. # K key空间事件,烦死在 __keyspace@<db>__ prefix. # E key事件 __keyevent@<db>__ prefix. # g 没有特殊定义的通用命令DEL, EXPIRE, RENAME, ... # $ 字符串命令 # l 命令列表 # s 设置命令 # h 哈希命令 # z 有序设置命令 # x 过期事件(events generated every time a key expires) # e 销毁事件(events generated when a key is evicted for maxmemory) # A 别名 “g$lshzxe”, "AKE" 指所有事件 # # The "notify-keyspace-events" takes as argument a string that is composed # of zero or multiple characters. The empty string means that notifications # are disabled. # "notify-keyspace-events" 使用0个或多个字符做为参数,空参数表示不启用。 # # # Example: to enable list and generic events, from the point of view of the # event name, use: # 例子: 从事件名称的角度,打开列表和基本事件使用 # # notify-keyspace-events Elg # # Example 2: to get the stream of the expired keys subscribing to channel # name __keyevent@0__:expired use: # 例子2: 获取订阅的渠道的过期key的名称使用 # # notify-keyspace-events Ex # # By default all notifications are disabled because most users don't need # this feature and the feature has some overhead. Note that if you don't # specify at least one of K or E, no events will be delivered. # 由于大部分用户不需要使用这个特性,所以通知是被默认关闭的。注意如果使用一个 # ‘K’或 ‘E’的时候,不会有事件被发送 notify-keyspace-events "" ############################### ADVANCED CONFIG ############################### # Hashes are encoded using a memory efficient data structure when they have a # small number of entries, and the biggest entry does not exceed a given # threshold. These thresholds can be configured using the following directives. # 哈希是高效的内存数据结构编码方式,当有少量条目,大条目不超过给定域。这些域 # 可以用下面项来配置: hash-max-ziplist-entries 512 hash-max-ziplist-value 64 # Similarly to hashes, small lists are also encoded in a special way in order # to save a lot of space. The special representation is only used when # you are under the following limits: # 简单的哈希,小列表也会哈希为节省空间。特殊的表示仅用于你在下面的限制 list-max-ziplist-entries 512 list-max-ziplist-value 64 # Sets have a special encoding in just one case: when a set is composed # of just strings that happen to be integers in radix 10 in the range # of 64 bit signed integers. # The following configuration setting sets the limit in the size of the # set in order to use this special memory saving encoding. # 设置特定编码仅在下面这种情况:当一组由刚刚字符串恰好是在64位有符号整数的范围 # 整数基数10。 # 下面的设置限制制定内存的编码 set-max-intset-entries 512 # Similarly to hashes and lists, sorted sets are also specially encoded in # order to save a lot of space. This encoding is only used when the length and # elements of a sorted set are below the following limits: # 常见的哈希和列表、排序、sets 也会被特定编码。仅用于一个有序堆栈的元素做下面的限制 zset-max-ziplist-entries 128 zset-max-ziplist-value 64 # HyperLogLog sparse representation bytes limit. The limit includes the # 16 bytes header. When an HyperLogLog using the sparse representation crosses # this limit, it is converted into the dense representation. # # A value greater than 16000 is totally useless, since at that point the # dense representation is more memory efficient. # # The suggested value is ~ 3000 in order to have the benefits of # the space efficient encoding without slowing down too much PFADD, # which is O(N) with the sparse encoding. The value can be raised to # ~ 10000 when CPU is not a concern, but space is, and the data set is # composed of many HyperLogLogs with cardinality in the 0 - 15000 range. hll-sparse-max-bytes 3000 # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in # order to help rehashing the main Redis hash table (the one mapping top-level # keys to values). The hash table implementation Redis uses (see dict.c) # performs a lazy rehashing: the more operation you run into a hash table # that is rehashing, the more rehashing "steps" are performed, so if the # server is idle the rehashing is never complete and some more memory is used # by the hash table. # # The default is to use this millisecond 10 times every second in order to # actively rehash the main dictionaries, freeing memory when possible. # # If unsure: # use "activerehashing no" if you have hard latency requirements and it is # not a good thing in your environment that Redis can reply from time to time # to queries with 2 milliseconds delay. # # use "activerehashing yes" if you don't have such hard requirements but # want to free memory asap when possible. activerehashing yes # The client output buffer limits can be used to force disconnection of clients # that are not reading data from the server fast enough for some reason (a # common reason is that a Pub/Sub client can't consume messages as fast as the # publisher can produce them). # 客户端输出缓存现在适用于强制断开客户端但 # # The limit can be set differently for the three different classes of clients: # # normal -> normal clients including MONITOR clients # slave -> slave clients # pubsub -> clients subscribed to at least one pubsub channel or pattern # # The syntax of every client-output-buffer-limit directive is the following: # # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds> # # A client is immediately disconnected once the hard limit is reached, or if # the soft limit is reached and remains reached for the specified number of # seconds (continuously). # So for instance if the hard limit is 32 megabytes and the soft limit is # 16 megabytes / 10 seconds, the client will get disconnected immediately # if the size of the output buffers reach 32 megabytes, but will also get # disconnected if the client reaches 16 megabytes and continuously overcomes # the limit for 10 seconds. # # By default normal clients are not limited because they don't receive data # without asking (in a push way), but just after a request, so only # asynchronous clients may create a scenario where data is requested faster # than it can read. # # Instead there is a default limit for pubsub and slave clients, since # subscribers and slaves receive data in a push fashion. # # Both the hard or the soft limit can be disabled by setting them to zero. client-output-buffer-limit normal 0 0 0 client-output-buffer-limit slave 256mb 64mb 60 client-output-buffer-limit pubsub 32mb 8mb 60 # Redis calls an internal function to perform many background tasks, like # closing connections of clients in timeout, purging expired keys that are # never requested, and so forth. # # Not all tasks are performed with the same frequency, but Redis checks for # tasks to perform according to the specified "hz" value. # # By default "hz" is set to 10. Raising the value will use more CPU when # Redis is idle, but at the same time will make Redis more responsive when # there are many keys expiring at the same time, and timeouts may be # handled with more precision. # # The range is between 1 and 500, however a value over 100 is usually not # a good idea. Most users should use the default of 10 and raise this up to # 100 only in environments where very low latency is required. hz 10 # When a child rewrites the AOF file, if the following option is enabled # the file will be fsync-ed every 32 MB of data generated. This is useful # in order to commit the file to the disk more incrementally and avoid # big latency spikes. aof-rewrite-incremental-fsync yes
相关推荐
它的配置文件是管理和优化Redis服务器运行的关键。在这个压缩包中,包含了两个主要的配置文件:`redis.conf` 和 `sentinel.conf`。 `redis.conf` 是Redis服务器的基础配置文件,它包含了启动、网络、数据持久化、...
这是redis的配置文件,里面大部份注释都翻译了中文,英文不好的朋友,请参考中文注释,结合自己的情况修改。 RedisWatcher 这是redis启动windows服务的软件,除了这个之外,还针对32位和64位不同版本的windows系统...
Redis 2.8 配置文件是管理Redis服务器的关键组件,它定义了服务器的各种行为、性能和安全性设置。以下是对配置文件中部分重要参数的详细解释: ### 包含其他配置文件 (INCLUDES) `INCLUDES` 部分允许你在主配置文件...
# 绑定主机 IP,根据实际情况进行配置,也可以在节点配置文件 redis.conf 中覆盖 bind 192.168.40.202 127.0.0.1 protected-mode yes tcp-backlog 511 timeout 0 tcp-keepalive 0 # 后台运行,对应后文节点配置文件 ...
`redis.conf-中文翻译.txt`则提供了`redis.conf`配置文件的中文解释,有助于理解并正确配置Redis服务。 综上所述,这个压缩包提供了Redis的安装包、桌面管理工具、学习资料以及配置文件的中文翻译,是一套完整的...
解压Redis-x64-3.2.100.zip后,用户将得到包含可执行文件和其他必要文件的目录结构,包括`redis-server.exe`(服务器)、`redis-cli.exe`(命令行客户端)以及配置文件`redis.windows.conf`等。在运行Redis之前,...
启动Redis服务器、配置文件的修改以及如何设置守护进程模式都是初学者必须掌握的基础知识。 三、命令操作 手册中涵盖了所有Redis命令的详细解释,如`SET`用于设置键值,`GET`用于获取键值,`INCR`用于对整数值进行...
5. **数据导入导出**:能够方便地将Redis中的数据导出为文件,或者从文件导入数据到Redis,这对于备份、迁移或数据交换非常有用。 6. **实时监控**:可能包含性能指标的实时监控,如内存使用情况、命令执行速率等,...
在Spring Boot框架中,`application.properties`或`application.yml`是核心配置文件,它们定义了应用的环境变量和自定义设置。这些配置文件使得开发者能够快速启动和运行一个基于Spring的应用,无需大量的XML配置...
在使用Redis Desktop Manager 2019.0.0时,用户需要注意的是,尽管它提供了许多便利,但仍然需要一定的Redis基础知识,例如理解各种数据类型和命令,以及如何正确配置Redis服务器。同时,为了确保数据安全,不建议在...
6. KVStore for Redis配置:用户可以通过设置网络类型、重置操作、重启操作等来配置KVStore for Redis,需要注意一些操作可能导致系统重大变更甚至故障,或者导致人身伤害等结果。 7. 命令行操作:用户可以通过执行...
1. 云数据库Redis版技术白皮书简介:阿里云专有云Enterprise版云数据库Redis版技术白皮书V3.1.0是阿里云提供的一份技术文档,旨在帮助用户了解云数据库Redis版的使用和配置。 2. 法律声明:阿里云提醒用户在阅读或...
声明式配置则可以通过 XML 文件或 Properties 文件来配置。 Redisson 还提供了分布式对象、分布式集合、分布式锁和同步器等功能。分布式对象可以在多个Redis 节点之间共享,分布式集合可以在多个 Redis 节点之间...
9. `redis配置文件`:可能包含连接Redis的配置信息。 这个项目的学习价值在于,它将带你了解Django的MVT(Model-View-Template)设计模式,理解如何定义模型来存储数据,编写视图处理HTTP请求并返回响应,以及使用...
Nine AI.ChatGPT是基于...首先启动服务端进入service 创建.env文件 在其中修改 测试数据库信息和redis 配置完成后 pnpm dev 数据库通过orm映射 启动项目会自动创建数据库 启动完成后可以打开chat admin pnpm dev启动
配置文件可以设定语言包、时区和翻译策略,以适应全球用户的需求。 10. **错误和日志**:框架允许自定义错误处理和日志记录。在配置文件中,我们可以设置错误级别、错误报告策略以及日志写入方式,如文件、数据库或...
2. 未经阿里云事先书面许可,任何单位、公司或个人不得擅自摘抄、翻译、复制本文档内容的部分或全部,不得以任何方式或途径进行传播和宣传。 3. 由于产品版本升级、调整或其他原因,本文档内容有可能变更。阿里云...
Nine AI.ChatGPT是基于...首先启动服务端进入service 创建.env文件 在其中修改 测试数据库信息和redis 配置完成后 pnpm dev 数据库通过orm映射 启动项目会自动创建数据库 启动完成后可以打开chat admin pnpm dev启动
Nine AI.ChatGPT是基于...首先启动服务端进入service 创建.env文件 在其中修改 测试数据库信息和redis 配置完成后 pnpm dev 数据库通过orm映射 启动项目会自动创建数据库 启动完成后可以打开chat admin pnpm dev启动