Environment: RHEL 5.3
Software:
FastDFS_v1.28.tar.gz
libevent-1.4.9-stable.tar.gz
php
php-devel
# Test goals
Start the daemons automatically on boot
Provide file download over HTTP
Preparation:
Three roles: tracker, storage, client
Tracker: install FastDFS
Storage: install FastDFS
Client: install FastDFS plus the FastDFS PHP extension
(In this walkthrough, going by the host names and addresses in the output below, hdfs-1 / 192.168.2.200 is the tracker and hdfs-2 / 192.168.2.201, hdfs-4 / 192.168.2.203, hdfs-5 / 192.168.2.204 are the storage nodes.)
=================================================================
1. Download and install FastDFS
# Web (HTTP) support requires libevent, so install libevent first
cd /opt
wget http://down1.chinaunix.net/distfiles/libevent-1.4.9-stable.tar.gz
tar zxvf libevent-1.4.9-stable.tar.gz
cd libevent-1.4.9-stable
./configure --prefix=/usr
make;make install
# Install FastDFS
cd /opt
wget http://fastdfs.googlecode.com/files/FastDFS_v1.28.tar.gz
tar zxvf FastDFS_v1.28.tar.gz
cd FastDFS
# Edit make.sh so the build supports web access and the boot-time (init) scripts
vi make.sh   # uncomment the following two lines
...
WITH_HTTPD=1         # enable web support (the client does not need this)
WITH_LINUX_SERVICE=1 # install the boot-time init scripts
...
...
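The same edit can be done non-interactively; a sketch, assuming the two lines ship commented out with a single leading "#" in make.sh:
sed -i 's/^#WITH_HTTPD=1/WITH_HTTPD=1/; s/^#WITH_LINUX_SERVICE=1/WITH_LINUX_SERVICE=1/' make.sh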
# Build and install
./make.sh && ./make.sh install
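On the client, the FastDFS PHP extension listed in the preparation section also has to be built. A minimal sketch, assuming this FastDFS release ships a php_client directory in the source tree and that phpize/php-config come from the php/php-devel packages; the directory name and extension file name may differ by version:
cd /opt/FastDFS/php_client
phpize
./configure --with-php-config=/usr/bin/php-config
make && make install
echo "extension=fastdfs_client.so" >> /etc/php.ini   # then restart the web server / PHP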
============================================================================
2. Edit the configuration files
FastDFS has three roles (tracker, storage and client), and each role uses a different configuration file. Note that all three roles need FastDFS compiled and installed as in the steps above; only the client may skip WITH_HTTPD=1. Once FastDFS is installed, the files client.conf, http.conf, mime.types, storage.conf and tracker.conf are generated automatically under /etc/fdfs, and they only need a few small changes.
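For example, listing the generated files on a node (a sketch based on the file names just mentioned):
[root@hdfs-1 ~]# ls /etc/fdfs
client.conf  http.conf  mime.types  storage.conf  tracker.conf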
[root@hdfs-1 FastDFS]# cat /etc/fdfs/tracker.conf
# is this config file disabled
# false for enabled
# true for disabled
disabled=false
# bind an address of this host
# empty for bind all addresses of this host
bind_addr=
# the tracker server port
port=22122
# network timeout in seconds
network_timeout=20
# the base path to store data and log files
base_path=/install/fastdfs
# max concurrent connections this server supported
# max_connections worker threads start when this service startup
max_connections=256
# the method of selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
store_lookup=0
# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
store_group=group1
# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
store_server=0
# which path(means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
store_path=0
# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server=0
# reserved storage space for system or other applications.
# if the free(available) space of any stoarge server in
# a group <= reserved_storage_space,
# no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
reserved_storage_space = 4GB
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=
#unix username to run this program,
#not set (empty) means run by current user
run_by_user=
# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*
#allow_hosts=192.168.2.[1-255]
# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval = 10
# check storage server alive interval seconds
check_active_interval = 120
# thread stack size, should >= 64KB
# default value is 64KB
thread_stack_size=1MB
# auto adjust when the ip address of the storage server changed
# default value is true
storage_ip_changed_auto_adjust=true
# HTTP settings
http.disabled=false
# HTTP port on this tracker server
http.server_port=8080
# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval=30
# check storage HTTP server alive type, values are:
# tcp : connect to the storge server with HTTP port only,
# do not request and get response
# http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type=tcp
# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri=/status.html
#use "#include" directive to include http other settiongs
#include http.conf
# NOTE: the shipped file has an extra leading "#" in front of this #include http.conf line; remove it so the line reads "#include http.conf" ("#include" itself is the directive, not a comment)
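A sketch of making that change non-interactively on both the tracker and the storage nodes, assuming the shipped line is exactly "##include http.conf":
sed -i 's/^##include http.conf/#include http.conf/' /etc/fdfs/tracker.conf
sed -i 's/^##include http.conf/#include http.conf/' /etc/fdfs/storage.conf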
============================================================================
Storage nodes
The configuration files on hdfs-2, hdfs-4 and hdfs-5 are identical.
[root@hdfs-2 ~]# cat /etc/fdfs/storage.conf
# is this config file disabled
# false for enabled
# true for disabled
disabled=false
# the name of the group this storage server belongs to
group_name=group1
# bind an address of this host
# empty for bind all addresses of this host
bind_addr=
# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configed by above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true
# the storage server port
port=23000
# network timeout in seconds
network_timeout=60
# heart beat interval in seconds
heart_beat_interval=10
# disk usage report interval in seconds
stat_report_interval=60
# the base path to store data and log files
base_path=/install/fastdfs
# max concurrent connections server supported
# max_connections worker threads start when this service startup
max_connections=256
# when no entry to sync, try read binlog again after X milliseconds
# 0 for try again immediately (not need to wait)
sync_wait_msec=200
# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0
# sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time=00:00
# sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time=23:59
# write to the mark file after sync N files
# default value is 500
write_mark_file_freq=500
# path(disk or mount point) count, default value is 1
store_path_count=1
# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
store_path0=/install/fastdfs1
#store_path1=/home/yuqing/fastdfs2
# subdir_count * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path=256
# tracker_server can ocur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
tracker_server=192.168.2.200:22122
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=
#unix username to run this program,
#not set (empty) means run by current user
run_by_user=
# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*
# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributted by hash code
file_distribute_path_mode=0
# valid when file_distribute_to_path is set to 0 (round robin),
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100
# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0
# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval=10
# sync binlog buff / cache to disk every interval seconds
# this parameter is valid when write_to_binlog set to 1
# default value is 60 seconds
sync_binlog_buff_interval=60
# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval=300
# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size=1M
# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10
# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix=
# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0
# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS
# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0
# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf
#HTTP settings
http.disabled=false
# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
http.domain_name=
# the port of the web server on this storage server
http.server_port=8888
http.trunk_size=256KB
#use "#include" directive to include HTTP other settiongs
#include http.conf
# Remove the extra leading "#" from the #include http.conf line above, just as in tracker.conf
============================================================================
3. Start the daemons
# Tracker node:
[root@hdfs-1 ~]# mkdir /install/fastdfs/
[root@hdfs-1 ~]# /etc/init.d/fdfs_trackerd start
Starting FastDFS tracker server:
# Storage nodes
[root@hdfs-2 ~]# mkdir /install/fastdfs/
[root@hdfs-2 ~]# mkdir /install/fastdfs1/ # this is where the file data is actually stored!
[root@hdfs-2 ~]# /etc/init.d/fdfs_storaged start
Starting FastDFS storage server:
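Because make.sh was built with WITH_LINUX_SERVICE=1, init scripts for both daemons are installed under /etc/init.d. A sketch of enabling start-on-boot (one of the test goals), assuming chkconfig on RHEL 5 picks up these scripts:
[root@hdfs-1 ~]# chkconfig --add fdfs_trackerd && chkconfig fdfs_trackerd on
[root@hdfs-2 ~]# chkconfig --add fdfs_storaged && chkconfig fdfs_storaged on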
# Data layout after initialization
Note: this is the real data directory on the storage node; 256 x 256 subdirectories are created under it.
[root@hdfs-2 ~]# ls /install/fastdfs1/data/
Display all 256 possibilities? (y or n)
00/ 07/ 0E/ 15/ 1C/ 23/ 2A/ 31/ 38/ 3F/ 46/ 4D/ 54/ 5B/ 62/ 69/ 70/ 77/ 7E/ 85/ 8C/ 93/ 9A/ A1/ A8/ AF/ B6/ BD/ C4/ CB/ D2/ D9/ E0/ E7/ EE/ F5/ FC/
01/ 08/ 0F/ 16/ 1D/ 24/ 2B/ 32/ 39/ 40/ 47/ 4E/ 55/ 5C/ 63/ 6A/ 71/ 78/ 7F/ 86/ 8D/ 94/ 9B/ A2/ A9/ B0/ B7/ BE/ C5/ CC/ D3/ DA/ E1/ E8/ EF/ F6/ FD/
02/ 09/ 10/ 17/ 1E/ 25/ 2C/ 33/ 3A/ 41/ 48/ 4F/ 56/ 5D/ 64/ 6B/ 72/ 79/ 80/ 87/ 8E/ 95/ 9C/ A3/ AA/ B1/ B8/ BF/ C6/ CD/ D4/ DB/ E2/ E9/ F0/ F7/ FE/
03/ 0A/ 11/ 18/ 1F/ 26/ 2D/ 34/ 3B/ 42/ 49/ 50/ 57/ 5E/ 65/ 6C/ 73/ 7A/ 81/ 88/ 8F/ 96/ 9D/ A4/ AB/ B2/ B9/ C0/ C7/ CE/ D5/ DC/ E3/ EA/ F1/ F8/ FF/
04/ 0B/ 12/ 19/ 20/ 27/ 2E/ 35/ 3C/ 43/ 4A/ 51/ 58/ 5F/ 66/ 6D/ 74/ 7B/ 82/ 89/ 90/ 97/ 9E/ A5/ AC/ B3/ BA/ C1/ C8/ CF/ D6/ DD/ E4/ EB/ F2/ F9/
05/ 0C/ 13/ 1A/ 21/ 28/ 2F/ 36/ 3D/ 44/ 4B/ 52/ 59/ 60/ 67/ 6E/ 75/ 7C/ 83/ 8A/ 91/ 98/ 9F/ A6/ AD/ B4/ BB/ C2/ C9/ D0/ D7/ DE/ E5/ EC/ F3/ FA/
06/ 0D/ 14/ 1B/ 22/ 29/ 30/ 37/ 3E/ 45/ 4C/ 53/ 5A/ 61/ 68/ 6F/ 76/ 7D/ 84/ 8B/ 92/ 99/ A0/ A7/ AE/ B5/ BC/ C3/ CA/ D1/ D8/ DF/ E6/ ED/ F4/ FB/
# Check the processes and listening ports
[root@hdfs-1 ~]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22122 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
...
[root@hdfs-1 ~]# ps -aux | grep fdfs
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ
root 4071 0.0 0.3 286952 1840 ? Sl 07:05 0:00 /usr/local/bin/fdfs_trackerd /etc/fdfs/tracker.conf
[root@hdfs-2 ~]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:23000 0.0.0.0:* LISTEN
[root@hdfs-2 ~]# ps -aux | grep fdfs
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ
root 4188 0.0 0.3 279124 2000 ? Sl 03:42 0:00 /usr/local/bin/fdfs_storaged /etc/fdfs/storage.conf
root 4452 0.0 0.1 4116 612 pts/1 R+ 03:43 0:00 grep fdfs
======================================================================
4. Upload a file and download it over HTTP
[root@hdfs-2 ~]# cd /opt/
[root@hdfs-2 opt]# touch dfstest20100705.txt
[root@hdfs-2 opt]# echo 123456789 > dfstest20100705.txt
[root@hdfs-2 opt]# /usr/local/bin/fdfs_test /etc/fdfs/storage.conf upload /opt/dfstest20100705.txt
This is FastDFS client test program v1.28
Copyright (C) 2008, Happy Fish / YuQing
FastDFS may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page http://www.csource.org/
for more detail.
base_path=/install/fastdfs, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0
group_name=group1, ip_addr=192.168.2.201, port=23000
storage_upload_by_filename
group_name=group1, remote_filename=M00/00/00/wKgCyUwq3vAAAAAAAAAACg6ulXQ871.txt
source ip address: 192.168.2.201
file timestamp=2010-06-30 14:06:40
file size=10
file url: http://192.168.2.200/group1/M00/00/00/wKgCyUwq3vAAAAAAAAAACg6ulXQ871.txt
storage_upload_slave_by_filename
group_name=group1, remote_filename=M00/00/00/wKgCyUwq3vAAAAAAAAAACg6ulXQ871_big.txt
source ip address: 192.168.2.201
file timestamp=2010-06-30 14:06:40
file size=10
file url: http://192.168.2.200/group1/M00/00/00/wKgCyUwq3vAAAAAAAAAACg6ulXQ871_big.txt
# The tracker node's HTTP port is configured as 8080, so downloads must go through :8080
Open http://192.168.2.200:8080/group1/M00/00/00/wKgCyUwq3vAAAAAAAAAACg6ulXQ871_big.txt in a browser and the file is served.
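The same URL can also be fetched from the command line (a sketch; the file uploaded above contains "123456789"):
[root@hdfs-2 opt]# wget -q -O - http://192.168.2.200:8080/group1/M00/00/00/wKgCyUwq3vAAAAAAAAAACg6ulXQ871_big.txt
123456789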
# If the file does not exist, the server responds with:
|
Method Not Implemented
Invalid method in request
|
======================================================================
5. Inspect the data on each node
# Tracker node
[root@hdfs-1 ~]# tree /install/fastdfs/
/install/fastdfs/
|-- data
| |-- storage_changelog.dat
| |-- storage_groups.dat
| |-- storage_servers.dat
| `-- storage_sync_timestamp.dat
`-- logs
`-- trackerd.log
2 directories, 5 files
[root@hdfs-1 ~]# cat /install/fastdfs/data/storage_changelog.dat
[root@hdfs-1 ~]# cat /install/fastdfs/data/storage_groups.dat
group1,23000,1,256
[root@hdfs-1 ~]# cat /install/fastdfs/data/storage_servers.dat
group1,192.168.2.201,7,,0,24,24,8,8,24,24,16,16,8,8,1277816744,0,0
group1,192.168.2.203,7,192.168.2.201,1277803496,0,0,0,0,0,0,0,0,0,0,0,1278551697,0
group1,192.168.2.204,1,192.168.2.201,1277803553,0,0,0,0,0,0,0,0,0,0,0,0,0
[root@hdfs-1 ~]# cat /install/fastdfs/data/storage_sync_timestamp.dat
group1,192.168.2.201,0
# Tracker startup log
[root@hdfs-1 ~]# cat /install/fastdfs/logs/trackerd.log
[2010-06-29 17:18:36] INFO - FastDFS v1.28, base_path=/install/fastdfs, network_timeout=20s, port=22122, bind_addr=, max_connections=256, store_lookup=0, store_group=, store_server=0, store_path=0, reserved_storage_space=4096MB, download_server=0, allow_ip_count=-1, sync_log_buff_interval=10s, check_active_interval=120s, thread_stack_size=1024 KB, storage_ip_changed_auto_adjust=1
[2010-06-29 17:18:36] INFO - HTTP supported: server_port=8080, default_content_type=application/octet-stream, anti_steal_token=0, token_ttl=0s, anti_steal_secret_key length=0, token_check_fail content_type=, token_check_fail buff length=0, check_active_interval=30, check_active_type=tcp, check_active_uri=/status.html
------------------------------------------------------------------------
# Storage node directory tree
[root@hdfs-2 fastdfs]# tree
.
|-- data
| |-- storage_stat.dat
| `-- sync
| |-- 192.168.2.203_23000.mark
| |-- 192.168.2.204_23000.mark
| |-- binlog.000
| `-- binlog.index
`-- logs
`-- storaged.log
3 directories, 6 files
# Storage statistics file
[root@hdfs-2 fastdfs]# cat /install/fastdfs/data/storage_stat.dat
total_upload_count=26
success_upload_count=26
total_download_count=16
success_download_count=16
last_source_update=1277878000
last_sync_update=0
total_set_meta_count=8
success_set_meta_count=8
total_delete_count=24
success_delete_count=24
total_get_meta_count=8
success_get_meta_count=8
total_create_link_count=0
success_create_link_count=0
total_delete_link_count=0
success_delete_link_count=0
dist_path_index_high=0
dist_path_index_low=0
dist_write_file_count=13
# Sync mark files tracking replication to the peer storage nodes
[root@hdfs-2 fastdfs]# cat /install/fastdfs/data/sync/192.168.2.203_23000.mark
binlog_index=0
binlog_offset=4612
need_sync_old=1
sync_old_done=1
until_timestamp=1277803496
scan_row_count=76
sync_row_count=76
[root@hdfs-2 fastdfs]# cat /install/fastdfs/data/sync/192.168.2.204_23000.mark
binlog_index=0
binlog_offset=4612
need_sync_old=1
sync_old_done=1
until_timestamp=1277803553
scan_row_count=76
sync_row_count=76
# Binlog (record of file operations to be synced)
[root@hdfs-2 fastdfs]# cat /install/fastdfs/data/sync/binlog.000
...
1277878000 C M00/00/00/wKgCyUwq3vAAAAAAAAAACg6ulXQ871.txt
1277878000 C M00/00/00/wKgCyUwq3vAAAAAAAAAACg6ulXQ871.txt-m
1277878000 C M00/00/00/wKgCyUwq3vAAAAAAAAAACg6ulXQ871_big.txt
1277878000 C M00/00/00/wKgCyUwq3vAAAAAAAAAACg6ulXQ871_big.txt-m
# These four entries are the file just uploaded and its slave (_big) copy, each with its metadata (-m) file
[root@hdfs-2 fastdfs]# cat /install/fastdfs/data/sync/binlog.index
0
[root@hdfs-2 fastdfs]# cat /install/fastdfs/logs/storaged.log
[2010-06-30 13:53:48] INFO - FastDFS v1.28, base_path=/install/fastdfs, store_path_count=1, subdir_count_per_path=256, group_name=group1, network_timeout=60s, port=23000, bind_addr=, client_bind=1, max_connections=256, heart_beat_interval=10s, stat_report_interval=60s, tracker_server_count=1, sync_wait_msec=200ms, sync_interval=0ms, sync_start_time=00:00, sync_end_time: 23:59, write_mark_file_freq=500, allow_ip_count=-1, file_distribute_path_mode=0, file_distribute_rotate_count=100, fsync_after_written_bytes=0, sync_log_buff_interval=10s, sync_binlog_buff_interval=60s, sync_stat_file_interval=300s, thread_stack_size=1024 KB, upload_priority=10, if_alias_prefix=, check_file_duplicate=0, FDHT group count=0, FDHT server count=0, FDHT key_namespace=, FDHT keep_alive=0, HTTP server port=8888, domain name=
[2010-06-30 13:53:48] INFO - HTTP supported: server_port=8888, http_trunk_size=262144, default_content_type=application/octet-stream, anti_steal_token=0, token_ttl=0s, anti_steal_secret_key length=0, token_check_fail content_type=, token_check_fail buff length=0
[2010-06-30 13:59:58] INFO - file: storage_sync.c, line: 2029, successfully connect to storage server 192.168.2.203:23000
[2010-06-30 14:00:08] INFO - file: storage_sync.c, line: 2029, successfully connect to storage server 192.168.2.203:23000
[2010-06-30 14:00:58] INFO - file: storage_sync.c, line: 2029, successfully connect to storage server 192.168.2.204:23000
[2010-06-30 14:01:08] INFO - file: storage_sync.c, line: 2029, successfully connect to storage server 192.168.2.204:23000
# Data files on the storage node
[root@hdfs-2 fastdfs]# ls -l /install/fastdfs1/data/00/00/
total 16
-rw-r--r-- 1 root root 10 Jun 30 14:06 wKgCyUwq3vAAAAAAAAAACg6ulXQ871_big.txt
-rw-r--r-- 1 root root 49 Jun 30 14:06 wKgCyUwq3vAAAAAAAAAACg6ulXQ871_big.txt-m
-rw-r--r-- 1 root root 10 Jun 30 14:06 wKgCyUwq3vAAAAAAAAAACg6ulXQ871.txt
-rw-r--r-- 1 root root 49 Jun 30 14:06 wKgCyUwq3vAAAAAAAAAACg6ulXQ871.txt-m
==================================================================
6. Check every node with the monitor tool
[root@hdfs-5 fdfs]# /usr/local/bin/fdfs_monitor /etc/fdfs/storage.conf
base_path=/install/fastdfs, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0
tracker server is 192.168.2.200:22122
group count: 1
Group 1:
group name = group1
free space = 6 GB
storage server count = 3
active server count = 3
storage_port = 23000
storage_http_port = 8888
store path count = 1
subdir count per path= 256
current write server index = 1
Host 1:
ip_addr = 192.168.2.201 (hdfs-2) ACTIVE
http domain =
version = 1.28
up time = 2010-06-30 13:35:13
total storage = 7GB
free storage = 6GB
upload priority = 10
source ip_addr =
total_upload_count = 26
success_upload_count = 26
total_set_meta_count = 8
success_set_meta_count = 8
total_delete_count = 24
success_delete_count = 24
total_download_count = 16
success_download_count = 16
total_get_meta_count = 8
success_get_meta_count = 8
total_create_link_count = 0
success_create_link_count = 0
total_delete_link_count = 0
success_delete_link_count = 0
last_heart_beat_time = 2010-06-29 17:48:39
last_source_update = 2010-06-30 14:06:40
last_sync_update = 1970-01-01 08:00:00
last_synced_timestamp= 1970-01-01 08:00:00
Host 2:
ip_addr = 192.168.2.203 (hdfs-4) ACTIVE
http domain =
version = 1.28
up time = 2010-07-08 09:14:37
total storage = 7GB
free storage = 6GB
upload priority = 10
source ip_addr = 192.168.2.201
total_upload_count = 0
success_upload_count = 0
total_set_meta_count = 0
success_set_meta_count = 0
total_delete_count = 0
success_delete_count = 0
total_download_count = 0
success_download_count = 0
total_get_meta_count = 0
success_get_meta_count = 0
total_create_link_count = 0
success_create_link_count = 0
total_delete_link_count = 0
success_delete_link_count = 0
last_heart_beat_time = 2010-06-29 17:48:37
last_source_update = 1970-01-01 08:00:00
last_sync_update = 2010-07-08 09:22:16
last_synced_timestamp= 2010-06-30 14:06:40
Host 3:
ip_addr = 192.168.2.204 (hdfs-5) ACTIVE
http domain =
version = 1.28
up time = 2010-07-08 09:15:17
total storage = 7GB
free storage = 6GB
upload priority = 10
source ip_addr = 192.168.2.201
total_upload_count = 0
success_upload_count = 0
total_set_meta_count = 0
success_set_meta_count = 0
total_delete_count = 0
success_delete_count = 0
total_download_count = 0
success_download_count = 0
total_get_meta_count = 0
success_get_meta_count = 0
total_create_link_count = 0
success_create_link_count = 0
total_delete_link_count = 0
success_delete_link_count = 0
last_heart_beat_time = 2010-06-29 17:48:34
last_source_update = 1970-01-01 08:00:00
last_sync_update = 2010-07-08 09:21:57
last_synced_timestamp= 2010-06-30 14:06:40