1. Introduction
As new big-data processing frameworks keep emerging, Storm has gradually faded from view: its community has gone nearly three years without active maintenance, and it has largely been superseded by Flink. In what follows, we adopt Flink as our core big-data processing framework and port our existing Storm code to it.
Before diving in, it is worth reviewing Flink's strengths to understand why choosing it is both advantageous and necessary.
Flink's advantages and features are well documented online; see, for example, the following link:
https://www.infoq.cn/article/fRt1RF1pxu_ZtmeObOoJ
This article is hands-on: we first build a Flink runtime environment, then run real code to get a feel for how Flink behaves.
2. Hello World: A First Try
2.1 Setting up the runtime environment
We use Docker to quickly build and start a Flink environment (see the startup note after the compose file below). For details on using Docker itself, please search online.
docker-compose.yml
version: "2.1"
services:
  jobmanager:
    image: flink
    expose:
      - "6123"
    ports:
      - "8081:8081"
    command: jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  taskmanager:
    image: flink
    expose:
      - "6121"
      - "6122"
    depends_on:
      - jobmanager
    command: taskmanager
    links:
      - "jobmanager:jobmanager"
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
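With Docker and docker-compose installed, the cluster can be brought up with docker-compose up -d. Since the compose file maps port 8081, the Flink web UI should then be reachable from the host at http://localhost:8081.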
2.2 Writing the hello world program
Maven project configuration file: pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.tchen</groupId>
<artifactId>flink</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>flink</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<flink.version>1.7.0</flink.version>
<java.version>1.8</java.version>
<scala.binary.version>2.11</scala.binary.version>
<maven.compiler.source>${java.version}</maven.compiler.source>
<maven.compiler.target>${java.version}</maven.compiler.target>
</properties>
<repositories>
<repository>
<id>apache.snapshots</id>
<name>Apache Development Snapshot Repository</name>
<url>https://repository.apache.org/content/repositories/snapshots/</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
<dependencies>
<!-- Apache Flink dependencies -->
<!-- These dependencies are provided, because they should not be packaged into the JAR file. -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<!-- Add connector dependencies here. They must be in the default scope (compile). -->
<!-- Example:
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
</dependency>
-->
<!-- Add logging framework, to produce console output when running in the IDE. -->
<!-- These dependencies are excluded from the application JAR by default. -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.7</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
<scope>runtime</scope>
</dependency>
</dependencies>
<build>
<plugins>
<!-- Java Compiler -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
</configuration>
</plugin>
<!-- We use the maven-shade plugin to create a fat jar that contains all necessary dependencies. -->
<!-- Change the value of <mainClass>...</mainClass> if your program entry point changes. -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.0.0</version>
<executions>
<!-- Run shade goal on package phase -->
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<artifactSet>
<excludes>
<exclude>org.apache.flink:force-shading</exclude>
<exclude>com.google.code.findbugs:jsr305</exclude>
<exclude>org.slf4j:*</exclude>
<exclude>log4j:*</exclude>
</excludes>
</artifactSet>
<filters>
<filter>
<!-- Do not copy the signatures in the META-INF folder.
Otherwise, this might cause SecurityExceptions when using the JAR. -->
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>com.tchen.flink.helloworld.SocketWindowWordCount</mainClass>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
<pluginManagement>
<plugins>
<!-- This improves the out-of-the-box experience in Eclipse by resolving some warnings. -->
<plugin>
<groupId>org.eclipse.m2e</groupId>
<artifactId>lifecycle-mapping</artifactId>
<version>1.0.0</version>
<configuration>
<lifecycleMappingMetadata>
<pluginExecutions>
<pluginExecution>
<pluginExecutionFilter>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<versionRange>[3.0.0,)</versionRange>
<goals>
<goal>shade</goal>
</goals>
</pluginExecutionFilter>
<action>
<ignore/>
</action>
</pluginExecution>
<pluginExecution>
<pluginExecutionFilter>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<versionRange>[3.1,)</versionRange>
<goals>
<goal>testCompile</goal>
<goal>compile</goal>
</goals>
</pluginExecutionFilter>
<action>
<ignore/>
</action>
</pluginExecution>
</pluginExecutions>
</lifecycleMappingMetadata>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
<!-- This profile helps to make things run out of the box in IntelliJ -->
<!-- It adds Flink's core classes to the runtime classpath. -->
<!-- Otherwise they are missing in IntelliJ, because the dependency is 'provided' -->
<profiles>
<profile>
<id>add-dependencies-for-IDEA</id>
<activation>
<property>
<name>idea.version</name>
</property>
</activation>
<dependencies>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>${flink.version}</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
<scope>compile</scope>
</dependency>
</dependencies>
</profile>
</profiles>
</project>
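With this pom.xml, mvn clean package builds a fat jar via the shade plugin during the package phase; by Maven convention it lands under target/ as flink-0.0.1-SNAPSHOT.jar. The Flink dependencies themselves are marked provided because the cluster already supplies them at runtime.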
Java program: SocketWindowWordCount.java
package com.tchen.flink.helloworld;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;
public class SocketWindowWordCount {
public static void main(String[] args) throws Exception {
// TCP input source
final int port;
final String host;
port = 9008; // TCP port that nc listens on
host = "10.10.50.70"; // IP address of the Docker host
// get the execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// get input data by connecting to the socket
DataStream<String> text = env.socketTextStream(host, port, "\n");
// parse the data, group it, window it, and aggregate the counts
DataStream<WordWithCount> windowCounts = text
.flatMap(new FlatMapFunction<String, WordWithCount>() {
@Override
public void flatMap(String value, Collector<WordWithCount> out) {
for (String word : value.split("\\s")) {
out.collect(new WordWithCount(word, 1L));
}
}
})
.keyBy("word")
.timeWindow(Time.seconds(5), Time.seconds(1))
.reduce(new ReduceFunction<WordWithCount>() {
@Override
public WordWithCount reduce(WordWithCount a, WordWithCount b) {
return new WordWithCount(a.word, a.count + b.count);
}
});
// print the results with a single thread, rather than in parallel
windowCounts.print().setParallelism(1);
env.execute("Socket Window WordCount");
}
// Data type for words with count
public static class WordWithCount {
public String word;
public long count;
public WordWithCount() {}
public WordWithCount(String word, long count) {
this.word = word;
this.count = count;
}
@Override
public String toString() {
return word + " : " + count;
}
}
}
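To try it out, the following steps are a reasonable sketch (adjust names and paths to your setup): start a text source on the Docker host with nc -lk 9008, then submit the fat jar to the job manager, for example with flink run -c com.tchen.flink.helloworld.SocketWindowWordCount target/flink-0.0.1-SNAPSHOT.jar. The job uses a sliding window of 5 seconds that advances every second, so the counts for the last five seconds are emitted once per second; the output goes to the task manager's stdout (visible via docker logs or the web UI).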
3. Local Debugging
Sample code: FlinkLocal.java
package com.tchen.flink.local;
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
public class FlinkLocal {
public static void main(String[] args) throws Exception {
ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
DataSet<String> data = env.readTextFile("file:///d:/flink-data.txt");
data
.filter(new FilterFunction<String>() {
@Override
public boolean filter(String value) {
return value.startsWith("http://");
}
})
.writeAsText("file:///d:/flink-data-out.txt");
JobExecutionResult res = env.execute();
}
}
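Note that with the DataSet API, writeAsText produces a directory of partial files when the parallelism is greater than one; calling setParallelism(1) on the sink yields a single output file. Local execution works just as well for streaming jobs. Below is a minimal sketch of ours (the class name and sample data are illustrative, not from the original project) that runs a stream pipeline inside the current JVM via createLocalEnvironment, which makes IDE breakpoints straightforward:
package com.tchen.flink.local;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;
public class FlinkStreamLocal {
    public static void main(String[] args) throws Exception {
        // Run the job inside this JVM instead of submitting it to a cluster.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
        env.fromElements("hello flink", "hello storm")
            // Split each line into words; returns(...) supplies the type
            // information that Java erases from the lambda.
            .flatMap((String line, Collector<String> out) -> {
                for (String word : line.split("\\s")) {
                    out.collect(word);
                }
            })
            .returns(String.class)
            .print();
        env.execute("Local Streaming Debug");
    }
}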
References:
Official:
https://ci.apache.org/projects/flink/flink-docs-release-1.9/getting-started/tutorials/local_setup.html
https://github.com/apache/flink/blob/master/flink-examples/flink-examples-streaming/pom.xml
Other:
https://flink-docs-cn.gitbook.io/project/05-ying-yong-kai-fa/batch-dataset-api/ben-di-zhi-hang
https://www.iamle.com/archives/2572.html
https://juejin.im/post/5c4f16dbe51d454f342fb7e7
https://www.infoq.cn/article/zbBAGroBgtytDiBs*Xq9