Kevin12 (Shanghai)
Installing Flume and Testing Failover

1. Goal
Configure Flume to monitor a local directory for changes and upload the changed files to HDFS.
2. Cluster plan (Flume is installed on all 3 machines: master1, worker1, worker2)


3. Software preparation
Download the package from http://flume.apache.org/download.html; the latest release at the time of writing is apache-flume-1.6.0-bin.tar.gz.
Upload it to /usr/local/flume on the virtual machine, creating the directory first if it does not exist, then extract it:
root@master1:/usr/local/flume# tar -zxvf apache-flume-1.6.0-bin.tar.gz

4. Environment variables (identical on all 3 machines)
Edit .bashrc and append the following:
export FLUME_HOME=/usr/local/flume/apache-flume-1.6.0-bin 
export FLUME_CONF_DIR=${FLUME_HOME}/conf 
export PATH=.:${JAVA_HOME}/bin:${SCALA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:${HIVE_HOME}/bin:${FLUME_HOME}/bin:$PATH

Run root@master1:/usr/local/flume# source ~/.bashrc for the changes to take effect.
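The exports above can also be scripted. The sketch below writes them to a standalone snippet and sources it; /tmp/flume-env.sh is an illustrative location (in practice the same lines live in ~/.bashrc as shown above), and the other *_HOME entries from the full PATH line are omitted:

```shell
# Sketch only: write the Flume environment variables to a snippet and
# source it. Paths assume the install layout used in this post.
cat > /tmp/flume-env.sh <<'EOF'
export FLUME_HOME=/usr/local/flume/apache-flume-1.6.0-bin
export FLUME_CONF_DIR=${FLUME_HOME}/conf
export PATH=${FLUME_HOME}/bin:$PATH
EOF
. /tmp/flume-env.sh
echo "$FLUME_HOME"
```

After sourcing, flume-ng resolves from $PATH on any of the three nodes.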
5. Per-node configuration
On the master node, edit conf/flume-client.properties (the log-collecting source):
#agent1 name  
agent1.channels = c1  
agent1.sources = r1  
agent1.sinks = k1 k2  
   
#set group  
agent1.sinkgroups = g1   
   
#set channel  
agent1.channels.c1.type = memory  
agent1.channels.c1.capacity = 1000  
agent1.channels.c1.transactionCapacity = 100  
   
agent1.sources.r1.channels = c1  
agent1.sources.r1.type = spooldir  
agent1.sources.r1.spoolDir = /usr/local/flume/tmp/TestDir  
   
agent1.sources.r1.interceptors = i1 i2  
agent1.sources.r1.interceptors.i1.type = static  
agent1.sources.r1.interceptors.i1.key = Type  
agent1.sources.r1.interceptors.i1.value = LOGIN  
agent1.sources.r1.interceptors.i2.type = timestamp  
   
# set sink1  
agent1.sinks.k1.channel = c1  
agent1.sinks.k1.type = avro  
agent1.sinks.k1.hostname = worker1  
agent1.sinks.k1.port = 52020  
   
# set sink2  
agent1.sinks.k2.channel = c1  
agent1.sinks.k2.type = avro  
agent1.sinks.k2.hostname = worker2  
agent1.sinks.k2.port = 52020  
   
#set sink group  
agent1.sinkgroups.g1.sinks = k1 k2  
   
#set failover  
agent1.sinkgroups.g1.processor.type = failover  
agent1.sinkgroups.g1.processor.priority.k1 = 10  
agent1.sinkgroups.g1.processor.priority.k2 = 1  
agent1.sinkgroups.g1.processor.maxpenalty = 10000  
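With the failover processor, only the highest-priority healthy sink receives events; when it fails it goes on a penalty list with a back-off capped at maxpenalty milliseconds while the next sink takes over. As a hedged sketch of how this scales out (worker3 is hypothetical and not part of this cluster), a third collector would be added like this:

```properties
# Hypothetical sketch: a third avro sink pointing at a node "worker3"
# (not part of this post's cluster), added to the same failover group.
agent1.sinks.k3.channel = c1
agent1.sinks.k3.type = avro
agent1.sinks.k3.hostname = worker3
agent1.sinks.k3.port = 52020

agent1.sinkgroups.g1.sinks = k1 k2 k3
# Priority between k1 (10) and k2 (1): k3 becomes the second choice.
agent1.sinkgroups.g1.processor.priority.k3 = 5
```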


On worker1 and worker2, edit conf/flume-server.properties.
On worker1:
 
#set Agent name  
a1.sources = r1  
a1.channels = c1  
a1.sinks = k1  
   
#set channel  
a1.channels.c1.type = memory  
a1.channels.c1.capacity = 1000  
a1.channels.c1.transactionCapacity = 100  
   
# avro source: receive events sent by the agent  
a1.sources.r1.type = avro  
a1.sources.r1.bind = worker1  
a1.sources.r1.port = 52020  
a1.sources.r1.interceptors = i1  
a1.sources.r1.interceptors.i1.type = static  
a1.sources.r1.interceptors.i1.key = Collector  
a1.sources.r1.interceptors.i1.value = worker1  
a1.sources.r1.channels = c1  
   
#set sink to hdfs  
a1.sinks.k1.type=hdfs  
a1.sinks.k1.hdfs.path=/library/flume  
a1.sinks.k1.hdfs.fileType=DataStream  
a1.sinks.k1.hdfs.writeFormat=TEXT  
a1.sinks.k1.hdfs.rollInterval=1  
a1.sinks.k1.channel=c1  
a1.sinks.k1.hdfs.filePrefix=%Y-%m-%d 
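Note that hdfs.rollInterval=1 rolls a new HDFS file roughly every second, which is handy for watching this demo but produces many small files. For real use, rolling by size or event count is the usual choice; a sketch with illustrative values (these are standard HDFS-sink properties, the numbers are not from this post):

```properties
# Illustrative roll settings: disable time-based rolling, roll at 128 MB,
# and close idle files after 60 s. Values are examples only.
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.idleTimeout = 60
```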


On worker2:
#set Agent name  
a1.sources = r1  
a1.channels = c1  
a1.sinks = k1  
   
#set channel  
a1.channels.c1.type = memory  
a1.channels.c1.capacity = 1000  
a1.channels.c1.transactionCapacity = 100  
   
# avro source: receive events sent by the agent  
a1.sources.r1.type = avro  
a1.sources.r1.bind = worker2  
a1.sources.r1.port = 52020  
a1.sources.r1.interceptors = i1  
a1.sources.r1.interceptors.i1.type = static  
a1.sources.r1.interceptors.i1.key = Collector  
a1.sources.r1.interceptors.i1.value = worker2  
a1.sources.r1.channels = c1  
#set sink to hdfs  
a1.sinks.k1.type=hdfs  
a1.sinks.k1.hdfs.path=/library/flume  
a1.sinks.k1.hdfs.fileType=DataStream  
a1.sinks.k1.hdfs.writeFormat=TEXT  
a1.sinks.k1.hdfs.rollInterval=1  
a1.sinks.k1.channel=c1  
a1.sinks.k1.hdfs.filePrefix=%Y-%m-%d  


6. Start the Flume cluster (the Hadoop cluster must be running first)
a) Start the collector side first, i.e. the agents configured on worker1 and worker2:
root@worker1:/usr/local/flume/apache-flume-1.6.0-bin/conf# flume-ng agent -n a1 -c conf -f flume-server.properties -Dflume.root.logger=DEBUG,console  
root@worker2:/usr/local/flume/apache-flume-1.6.0-bin/conf# flume-ng agent -n a1 -c conf -f flume-server.properties -Dflume.root.logger=DEBUG,console 


b) Then start the agent side, i.e. the configuration on master1:
root@master1:/usr/local/flume/apache-flume-1.6.0-bin/conf# flume-ng agent -n agent1 -c conf -f flume-client.properties -Dflume.root.logger=DEBUG,console
 

Note: because master1 configures agent1.sources.r1.spoolDir = /usr/local/flume/tmp/TestDir, that directory must exist before the agent on master1 is started, otherwise Flume reports an error. On worker1 and worker2 we configured a1.sinks.k1.hdfs.path=/library/flume, but that path need not be created in advance: when files change under /usr/local/flume/tmp/TestDir, Flume automatically creates /library/flume on HDFS.
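Creating the spooling directory up front avoids that startup error; a minimal sketch using the path from flume-client.properties:

```shell
# Ensure the directory watched by the spooldir source exists; the agent
# fails to start when this path is missing.
SPOOL_DIR=/usr/local/flume/tmp/TestDir
mkdir -p "$SPOOL_DIR"
ls -ld "$SPOOL_DIR"
```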
Once the cluster is up, check the processes on each node; the Flume process is named "Application".



7. Testing data transfer
Generate a new file under the configured path on master1 (simulating a web server producing a new log file).
At first, TestDir contains no files (apart from a hidden .flumespool folder; file names starting with a dot are hidden):
root@master1:/usr/local/flume/tmp/TestDir# ll 
total 12 
drwxr-xr-x 3 root root 4096 Jun 16 21:16 ./ 
drwxr-xr-x 3 root root 4096 Jun 16 21:16 ../ 
drwxr-xr-x 2 root root 4096 Jun 16 21:16 .flumespool/ 
root@master1:/usr/local/flume/tmp/TestDir# 

The configured path on HDFS contains no files either.

Create a test.log file under /usr/local/flume/tmp with the following content:
This is a test file ! 
This is a test file ! 
This is a test file ! 
Spark Hadoop JAVA Scala 
SPark Spark 
Hadoop 

Then copy test.log into the spooling directory:
root@master1:/usr/local/flume/tmp# cp test.log ./TestDir/ 

The master1 console prints log output like this:

At the same time, the worker1 console logs:

Finally, view the content of the generated file on HDFS:


Next, test uploading two files into TestDir at the same time.
Create test_1.log with this content:
Hadoop Hadoop Hadoop
Java Java Java
Hive Hive Hive
Then create test_2.log with this content:
Scala Scala Scala
Spark Spark Spark
Copy both files into /usr/local/flume/tmp/TestDir at once:
root@master1:/usr/local/flume/tmp# cp test_* ./TestDir/
Check the master1 console:

Check the worker1 console:

As the output shows, the two source files were merged into a single file on HDFS (their events arrived within the same roll window).
Now look at the file names on the source side:

Each file name now carries a .COMPLETED suffix.
Note: if TestDir has already ingested a file named test.log and you copy another test.log into the directory, Flume fails with an error like the following:
16/06/16 22:17:57 INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.  
16/06/16 22:17:57 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/flume/tmp/TestDir/test.log to /usr/local/flume/tmp/TestDir/test.log.COMPLETED  
16/06/16 22:17:57 ERROR source.SpoolDirectorySource: FATAL: Spool Directory source r1: { spoolDir: /usr/local/flume/tmp/TestDir }: Uncaught exception in SpoolDirectorySource thread. Restart or reconfigure Flume to continue processing.  
java.lang.IllegalStateException: File name has been re-used with different files. Spooling assumptions violated for /usr/local/flume/tmp/TestDir/test.log.COMPLETED  
    at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:378)  
    at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.retireCurrentFile(ReliableSpoolingFileEventReader.java:330)  
    at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:259)  
    at org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:228)  
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)  
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)  
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)  
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)  
    at java.lang.Thread.run(Thread.java:745)  


When this happens, you must delete the test.log file from TestDir and then restart the agent on master1.
Flume will nevertheless upload the file you copied into TestDir this time. To verify:
root@master1:/usr/local/flume/tmp# vim test.log
Edit the file and append a final line containing Hive:
root@master1:/usr/local/flume/tmp# cp test.log ./TestDir/
Check the WebUI:


root@master1:/usr/local/flume/tmp# hdfs dfs -cat /library/flume/2016-06-16.1466088454203  
16/06/16 22:47:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable  
This is a test file !  
This is a test file !  
This is a test file !  
Spark Hadoop JAVA Scala  
SPark Spark  
Hadoop  
Hive  
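The name-reuse error can also be avoided at the source. The spooling directory source supports a deletePolicy option (a standard Flume 1.6 spooldir setting, not something this post uses): with deletePolicy set to immediate, ingested files are deleted instead of being renamed to *.COMPLETED, so copying a file with the same name later no longer violates the spooling assumptions.

```properties
# Alternative (not used in this post): delete files after ingestion
# instead of renaming them, so file names can be reused safely.
agent1.sources.r1.deletePolicy = immediate
```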

8. Testing failover
Now let's test the Flume NG cluster's high availability (failover). The scenario: we upload a file via the agent on master1. Because Collector1 (worker1) is configured with a higher priority than Collector2 (worker2), Collector1 collects the logs and uploads them to storage first. We then kill Collector1, after which Collector2 takes over collection and upload. Finally, we manually restore the Flume service on the Collector1 node, upload another file on the agent, and observe that Collector1 resumes its higher-priority collection role:
On worker1:
root@worker1:~# jps 
23970 Application 
23826 NodeManager 
24677 Jps 
23690 DataNode 
root@worker1:~# kill -9 23970 
root@worker1:~# jps 
24688 Jps 
23826 NodeManager 
23690 DataNode 

At this point the worker1 console also shows that Collector1 has been killed.
Copy one more log file into TestDir (use a name that differs from the files copied earlier):
On master1:
root@master1:/usr/local/flume/tmp# vim test_3.log  
root@master1:/usr/local/flume/tmp# cat test_3.log   
Test Failover  
Test Failover  
root@master1:/usr/local/flume/tmp# cp test_3.log ./TestDir/ 


The master1 console shows the file change being detected and uploaded to HDFS; sink k1 then starts failing and repeatedly logs:
Attempting to create Avro Rpc client.  
16/06/16 22:41:11 INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.  
16/06/16 22:41:11 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/flume/tmp/TestDir/test_3.log to /usr/local/flume/tmp/TestDir/test_3.log.COMPLETED  
16/06/16 22:41:19 WARN sink.FailoverSinkProcessor: Sink k1 failed and has been sent to failover list  
org.apache.flume.EventDeliveryException: Failed to send events  
    at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:392)  
    at org.apache.flume.sink.FailoverSinkProcessor.process(FailoverSinkProcessor.java:182)  
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)  
    at java.lang.Thread.run(Thread.java:745)  
Caused by: org.apache.flume.EventDeliveryException: NettyAvroRpcClient { host: worker1, port: 52020 }: Failed to send batch  
    at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:315)  
    at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:376)  
    ... 3 more  
Caused by: org.apache.flume.EventDeliveryException: NettyAvroRpcClient { host: worker1, port: 52020 }: RPC request exception  
    at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:365)  
    at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:303)  
    ... 4 more  
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Error connecting to worker1/192.168.112.131:52020  
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)  
    at java.util.concurrent.FutureTask.get(FutureTask.java:206)  
    at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:357)  
    ... 5 more  
Caused by: java.io.IOException: Error connecting to worker1/192.168.112.131:52020  
    at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)  
    at org.apache.avro.ipc.NettyTransceiver.getRemoteName(NettyTransceiver.java:386)  
    at org.apache.avro.ipc.Requestor.writeHandshake(Requestor.java:202)  
    at org.apache.avro.ipc.Requestor.access$300(Requestor.java:52)  
    at org.apache.avro.ipc.Requestor$Request.getBytes(Requestor.java:478)  
    at org.apache.avro.ipc.Requestor.request(Requestor.java:147)  
    at org.apache.avro.ipc.Requestor.request(Requestor.java:129)  
    at org.apache.avro.ipc.specific.SpecificRequestor.invoke(SpecificRequestor.java:84)  
    at com.sun.proxy.$Proxy5.appendBatch(Unknown Source)  
    at org.apache.flume.api.NettyAvroRpcClient$2.call(NettyAvroRpcClient.java:348)  
    at org.apache.flume.api.NettyAvroRpcClient$2.call(NettyAvroRpcClient.java:344)  
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)  
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)  
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)  
    ... 1 more  
Caused by: java.net.ConnectException: Connection refused  
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)  
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)  
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:496)  
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:452)  
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:365)  
    ... 3 more  
16/06/16 22:41:22 INFO sink.AbstractRpcSink: Rpc sink k1: Building RpcClient with hostname: worker1, port: 52020  
16/06/16 22:41:22 INFO sink.AvroSink: Attempting to create Avro Rpc client.  
16/06/16 22:41:22 WARN api.NettyAvroRpcClient: Using default maxIOWorkers  
16/06/16 22:41:26 INFO sink.AbstractRpcSink: Rpc sink k1: Building RpcClient with hostname: worker1, port: 52020  
16/06/16 22:41:26 INFO sink.AvroSink: Attempting to create Avro Rpc client.  
16/06/16 22:41:26 WARN api.NettyAvroRpcClient: Using default maxIOWorkers  
16/06/16 22:41:37 INFO sink.AbstractRpcSink: Rpc sink k1: Building RpcClient with hostname: worker1, port: 52020  


Console output on worker2:
 
16/06/16 22:41:22 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false  
16/06/16 22:41:23 INFO hdfs.BucketWriter: Creating /library/flume/2016-06-16.1466088082911.tmp  
16/06/16 22:41:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable  
16/06/16 22:41:26 INFO hdfs.BucketWriter: Closing /library/flume/2016-06-16.1466088082911.tmp  
16/06/16 22:41:26 INFO hdfs.BucketWriter: Renaming /library/flume/2016-06-16.1466088082911.tmp to /library/flume/2016-06-16.1466088082911  
16/06/16 22:41:26 INFO hdfs.HDFSEventSink: Writer callback called.  


Check the WebUI:


View the content:
root@master1:/usr/local/flume/tmp# hdfs dfs -cat /library/flume/2016-06-16.1466088082911  
16/06/16 22:51:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable  
Test Failover  
Test Failover  


Restart the Collector1 service by running, on worker1:
root@worker1:/usr/local/flume/apache-flume-1.6.0-bin/conf# flume-ng agent -n a1 -c conf -f flume-server.properties -Dflume.root.logger=DEBUG,console  


Repeat the process above: generate a new file on the agent (master1); Collector1 (worker1) takes over the service again:
On master1:
root@master1:/usr/local/flume/tmp# vim test_4.log  
root@master1:/usr/local/flume/tmp# cat test_4.log   
Test Failover is good !!!   
root@master1:/usr/local/flume/tmp# cp test_4.log TestDir/  


master1 console:
16/06/16 22:57:00 INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.  
16/06/16 22:57:00 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/flume/tmp/TestDir/test_4.log to /usr/local/flume/tmp/TestDir/test_4.log.COMPLETED  


Worker1 log output:
16/06/16 22:57:03 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false  
16/06/16 22:57:03 INFO hdfs.BucketWriter: Creating /library/flume/2016-06-16.1466089023374.tmp  
16/06/16 22:57:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable  
16/06/16 22:57:06 INFO hdfs.BucketWriter: Closing /library/flume/2016-06-16.1466089023374.tmp  
16/06/16 22:57:06 INFO hdfs.BucketWriter: Renaming /library/flume/2016-06-16.1466089023374.tmp to /library/flume/2016-06-16.1466089023374  
16/06/16 22:57:06 INFO hdfs.HDFSEventSink: Writer callback called.
 


The test succeeded: Flume really does perform failover!

Source (Sina Weibo): http://weibo.com.ilovepains/