Without further ado, here is the configuration file. The precautions and startup notes are the same as on the Tracker side.
# is this config file disabled
# false for enabled
# true for disabled
disabled=false
# whether this config file is disabled: false = enabled, true = disabled

# the name of the group this storage server belongs to
group_name=group1
# the group this storage server belongs to; here the group name is group1

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=
# which IP address to bind; a server may have multiple NICs and addresses,
# this selects the one to use

# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configured by above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true
# only effective when bind_addr is set; controls whether this storage server
# binds bind_addr when it connects to other servers as a client
# (e.g. tracker servers or other storage servers)

# the storage server port
port=23000

# connect timeout in seconds
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
network_timeout=60
# if data still cannot be sent or received after this timeout, the network
# operation is considered failed

# heart beat interval in seconds
heart_beat_interval=30

# disk usage report interval in seconds
stat_report_interval=60
# interval at which the storage server reports remaining disk space to the
# tracker server, in seconds

# the base path to store data and log files
base_path=/home/fdfs

# max concurrent connections the server supported
# default value is 256
# more max_connections means more memory will be used
max_connections=256

# the buff size to recv / send data
# this parameter must be more than 8KB
# default value is 64KB
# since V2.00
buff_size = 256KB

# work thread count, should <= max_connections
# work threads deal with network io
# default value is 4
# since V2.00
work_threads=4

# if disk read / write separated
## false for mixed read and write
## true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true
# whether disk reads and writes are handled by separate threads; separated by default

# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1
# read threads per store path, default 1
# separated read/write: total read threads = disk_reader_threads * store_path_count
# mixed read/write: total read/write threads = (disk_reader_threads + disk_writer_threads) * store_path_count

# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1
# write threads per store path, default 1
# separated read/write: total write threads = disk_writer_threads * store_path_count
# mixed read/write: total read/write threads = (disk_reader_threads + disk_writer_threads) * store_path_count

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec=50
# when syncing files, if no entry to sync is found in the binlog, sleep N
# milliseconds before reading again; 0 means no sleep, retry immediately,
# which is not recommended because of CPU usage; set a smaller value such as
# 10ms if you want syncing to be as fast as possible

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0
# interval between finishing one file and syncing the next, in milliseconds;
# 0 means no sleep, sync the next file immediately

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time=00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time=23:59

# write to the mark file after sync N files
# default value is 500
write_mark_file_freq=500
# flush the storage mark file to disk after syncing N files

# path(disk or mount point) count, default value is 1
store_path_count=1
# the storage server can store files under multiple paths (e.g. disks); this
# sets the number of base store paths, usually just one

# store_path#, based 0, if store_path0 not exists, its value is base_path
# the paths must exist
store_path0=/home/fdfs
#store_path1=/home/yuqing/fastdfs2
# configure each store path one by one, indexed from 0 (store_path0,
# store_path1, ...), up to store_path_count - 1
# if store_path0 is not configured, it defaults to the path given by base_path

# subdir_count * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path=256
# FastDFS stores files in a two-level directory structure; this sets the number
# of directories per level (have a look at the data directory after storing a
# few files to see the mechanism)
# if this parameter is N (e.g. 256), the storage server automatically creates
# N * N subdirectories for storing files on its first run

# tracker_server can occur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
tracker_server=192.168.1.201:22122
# list of tracker servers; with multiple tracker servers, write one per line

# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

# unix group name to run this program,
# not set (empty) means run by the group of current user
run_by_group=
# the system group used to run FastDFS; defaults to the group of the user that
# started the process

# unix username to run this program,
# not set (empty) means run by current user
run_by_user=
# the system user used to run FastDFS; defaults to the user that started the process

# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*
# the IP range allowed to connect to this storage server, covering both clients
# and other storage servers

# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributed by hash code
file_distribute_path_mode=0
# how files are spread across the data directories:
# 0: round robin -- after a directory holds the configured number of files
#    (see file_distribute_rotate_count), switch to the next directory
# 1: random -- distribute according to the hash code of the file name

# valid when file_distribute_path_mode is set to 0 (round robin),
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100
# only effective when file_distribute_path_mode above is 0 (round robin);
# once the number of files in a directory reaches this value, subsequent
# uploads are stored in the next directory

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0
# when writing large files, call fsync to force the content to disk after every
# N bytes written; 0 means never call fsync

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval=10
# interval for syncing / flushing log messages to disk, in seconds
# note: the storage server does not write log messages to disk immediately;
# they are buffered in memory first

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval=10
# interval for syncing the binlog (update operation log) to disk, in seconds;
# this parameter affects the replication delay of newly uploaded files

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval=300
# interval for syncing the storage stat file to disk, in seconds
# note: the stat file is not synced if its content has not changed

# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size=512KB
# thread stack size; the FastDFS server side is multi-threaded

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10
# priority of this storage server as an upload source; it may be negative, and
# the lower the value, the higher the priority; this corresponds to the
# store_server=2 setting in tracker.conf

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default value is empty
if_alias_prefix=

# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0
# whether to check if an uploaded file already exists; if it does, the file
# content is not stored again and a symbolic link is created instead to save
# disk space
# this feature relies on FastDHT, so install FastDHT before enabling it
# 1 or yes: check; 0 or no: do not check

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash
# signature method used on file content for deduplication:
## hash: four 32-bit hash codes
## md5: MD5 signature

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS
# the namespace used in FastDHT when the previous parameter is set to 1 or yes
# (true / on also work)

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0
# connection mode to the FastDHT servers (persistent or not); default is 0
# (short connections); persistent connections are worth considering, depending
# on whether the FastDHT servers have enough connection capacity

# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf

# if log to access log
# default value is false
# since V4.00
use_access_log = false
# whether to record file operations in the access log

# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false
# whether to rotate the access log periodically; currently only daily rotation
# is supported

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time=00:00
# time of day at which the access log is rotated; only effective when
# rotate_access_log is set to true

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false
# whether to rotate the error log periodically; currently only daily rotation
# is supported

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00
# time of day at which the error log is rotated; only effective when
# rotate_error_log is set to true

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0
# rotate the access log by file size; 0 means never rotate by size, otherwise
# the access log is rotated to a new file once it reaches this size

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0
# rotate the error log by file size; 0 means never rotate by size, otherwise
# the error log is rotated to a new file once it reaches this size

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false
# whether to skip invalid binlog records when syncing files

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose idle time exceeds this value will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# use the ip address of this storage server if domain_name is empty,
# else this domain name will occur in the url redirected by the tracker server
http.domain_name=

# the port of the web server on this storage server
http.server_port=8888
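Once the file is saved, the storage daemon has to be started against it and its registration with the tracker verified, just as on the Tracker side. The following is a minimal sketch only, assuming the config is saved as /etc/fdfs/storage.conf and that a client config at /etc/fdfs/client.conf points tracker_server at 192.168.1.201:22122 as above; the paths and the test file are placeholders, not part of the original post.

# create the base path / store path used in the config; they must exist beforehand
mkdir -p /home/fdfs

# start the storage daemon with this configuration file
fdfs_storaged /etc/fdfs/storage.conf

# check that this storage server shows up under group1 and reaches ACTIVE state
fdfs_monitor /etc/fdfs/client.conf

# optional smoke test: upload a local file and note the returned file ID
fdfs_upload_file /etc/fdfs/client.conf /tmp/test.jpg

If fdfs_monitor lists the server as ACTIVE and the upload returns a file ID beginning with group1, the configuration above is working.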