Storm guarantees that each message coming off a spout will be fully processed. This page describes how Storm accomplishes this guarantee and what you have to do as a user to benefit from Storm's reliability capabilities.
What does it mean for a message to be "fully processed"?
A tuple coming off a spout can trigger thousands of tuples to be created based on it. Consider, for example, the streaming word count topology:
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("sentences", new KestrelSpout("kestrel.backtype.com",
                                               22133,
                                               "sentence_queue",
                                               new StringScheme()));
builder.setBolt("split", new SplitSentence(), 10)
        .shuffleGrouping("sentences");
builder.setBolt("count", new WordCount(), 20)
        .fieldsGrouping("split", new Fields("word"));
This topology reads sentences off of a Kestrel queue, splits the sentences into their constituent words, and then emits for each word the number of times it has seen that word before. A tuple coming off the spout triggers many tuples being created based on it: a tuple for each word in the sentence and a tuple for the updated count for each word. The resulting tree of messages has the sentence tuple at its root, a child tuple for each word, and under each word tuple a tuple for its updated count.
Storm considers a tuple coming off a spout "fully processed" when the tuple tree has been exhausted and every message in the tree has been processed. A tuple is considered failed when its tree of messages fails to be fully processed within a specified timeout. This timeout can be configured on a topology-specific basis using the Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS configuration and defaults to 30 seconds.
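For example, a topology whose tuple trees legitimately take longer than 30 seconds to complete can raise the timeout when it is submitted. This is only a sketch; it assumes the builder from the word count example above:
Config conf = new Config();
// Give each spout tuple's tree up to 60 seconds to complete before it is failed.
conf.setMessageTimeoutSecs(60);
StormSubmitter.submitTopology("word-count", conf, builder.createTopology());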
What happens if a message is fully processed or fails to be fully processed?
To answer this question, let's take a look at the lifecycle of a tuple coming off of a spout. For reference, here is the interface that spouts implement (see the Javadoc for more information):
public interface ISpout extends Serializable {
    void open(Map conf, TopologyContext context, SpoutOutputCollector collector);
    void close();
    void nextTuple();
    void ack(Object msgId);
    void fail(Object msgId);
}
First, Storm requests a tuple from the Spout by calling the nextTuple method on the Spout. The Spout uses the SpoutOutputCollector provided in the open method to emit a tuple to one of its output streams. When emitting a tuple, the Spout provides a "message id" that will be used to identify the tuple later. For example, the KestrelSpout reads a message off of the kestrel queue and emits as the "message id" the id provided by Kestrel for the message. Emitting a message to the SpoutOutputCollector looks like this:
_collector.emit(new Values("field1", "field2", 3), msgId);
Next, the tuple gets sent to consuming bolts and Storm takes care of tracking the tree of messages that is created. If Storm detects that a tuple is fully processed, Storm will call the ack method on the originating Spout task with the message id that the Spout provided to Storm. Likewise, if the tuple times out, Storm will call the fail method on the Spout. Note that a tuple will be acked or failed by the exact same Spout task that created it. So even if a Spout is executing many tasks across the cluster, a tuple won't be acked or failed by a different task than the one that created it.
Let's use KestrelSpout again to see what a Spout needs to do to guarantee message processing. When KestrelSpout takes a message off the Kestrel queue, it "opens" the message. This means the message is not actually taken off the queue yet, but instead placed in a "pending" state waiting for acknowledgement that the message is completed. While in the pending state, a message will not be sent to other consumers of the queue. Additionally, if a client disconnects, all pending messages for that client are put back on the queue. When a message is opened, Kestrel provides the client with the data for the message as well as a unique id for the message. The KestrelSpout uses that exact id as the "message id" for the tuple when emitting the tuple to the SpoutOutputCollector. Sometime later on, when ack or fail are called on the KestrelSpout, the KestrelSpout sends an ack or fail message to Kestrel with the message id to take the message off the queue or have it put back on.
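To make the pending-message pattern concrete, here is a minimal sketch of a spout built the same way. The MessageQueue and QueueMessage types and their open/ack/fail methods are hypothetical placeholders standing in for a real queue client, and imports are omitted as in the other examples:
public class ReliableQueueSpout extends BaseRichSpout {
    SpoutOutputCollector _collector;
    MessageQueue _queue; // hypothetical client for a queue that supports open/ack/fail

    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        _collector = collector;
        _queue = new MessageQueue("queue-host", 22133, "sentence_queue");
    }

    public void nextTuple() {
        // "Open" a message: it stays pending on the queue until it is acked or failed.
        QueueMessage msg = _queue.open();
        if(msg != null) {
            // Use the queue's id as the tuple's message id so ack/fail can be routed back.
            _collector.emit(new Values(msg.getData()), msg.getId());
        }
    }

    public void ack(Object msgId) {
        // The tuple tree completed: permanently remove the message from the queue.
        _queue.ack(msgId);
    }

    public void fail(Object msgId) {
        // The tuple tree failed or timed out: put the message back on the queue for replay.
        _queue.fail(msgId);
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("sentence"));
    }
}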
What is Storm's reliability API?
There are two things you have to do as a user to benefit from Storm's reliability capabilities. First, you need to tell Storm whenever you're creating a new link in the tree of tuples. Second, you need to tell Storm when you have finished processing an individual tuple. By doing both these things, Storm can detect when the tree of tuples is fully processed and can ack or fail the spout tuple appropriately. Storm's API provides a concise way of doing both of these tasks.
Specifying a link in the tuple tree is called anchoring. Anchoring is done at the same time you emit a new tuple. Let's use the following bolt as an example. This bolt splits a tuple containing a sentence into a tuple for each word:
public class SplitSentence extends BaseRichBolt {
    OutputCollector _collector;

    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        _collector = collector;
    }

    public void execute(Tuple tuple) {
        String sentence = tuple.getString(0);
        for(String word: sentence.split(" ")) {
            _collector.emit(tuple, new Values(word));
        }
        _collector.ack(tuple);
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
Each word tuple is anchored by specifying the input tuple as the first argument to emit. Since the word tuple is anchored, the spout tuple at the root of the tree will be replayed later on if the word tuple fails to be processed downstream. In contrast, let's look at what happens if the word tuple is emitted like this:
_collector.emit(new Values(word));
Emitting the word tuple this way causes it to be unanchored. If the tuple fails to be processed downstream, the root tuple will not be replayed. Depending on the fault-tolerance guarantees you need in your topology, sometimes it's appropriate to emit an unanchored tuple.
An output tuple can be anchored to more than one input tuple. This is useful when doing streaming joins or aggregations. A multi-anchored tuple failing to be processed will cause multiple tuples to be replayed from the spouts. Multi-anchoring is done by specifying a list of tuples rather than just a single tuple. For example:
List<Tuple> anchors = new ArrayList<Tuple>();
anchors.add(tuple1);
anchors.add(tuple2);
_collector.emit(anchors, new Values(1, 2, 3));
Multi-anchoring adds the output tuple into multiple tuple trees. Note that it's also possible for multi-anchoring to break the tree structure and create tuple DAGs, since a tuple can now have more than one parent.
Storm's implementation works for DAGs as well as trees (pre-release it only worked for trees, and the name "tuple tree" stuck).
Anchoring is how you specify the tuple tree -- the next and final piece to Storm's reliability API is specifying when you've finished processing an individual tuple in the tuple tree. This is done by using the ack and fail methods on the OutputCollector. If you look back at the SplitSentence example, you can see that the input tuple is acked after all the word tuples are emitted.
You can use the fail method on the OutputCollector to immediately fail the spout tuple at the root of the tuple tree. For example, your application may choose to catch an exception from a database client and explicitly fail the input tuple. By failing the tuple explicitly, the spout tuple can be replayed faster than if you waited for the tuple to time out.
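As a sketch of that pattern (the DatabaseClient type and its insert method are hypothetical placeholders, not a real API):
public class PersistWordBolt extends BaseRichBolt {
    OutputCollector _collector;
    DatabaseClient _db; // hypothetical database client

    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        _collector = collector;
        _db = new DatabaseClient();
    }

    public void execute(Tuple tuple) {
        try {
            _db.insert(tuple.getString(0));
            _collector.ack(tuple);
        } catch(Exception e) {
            // Fail immediately instead of waiting for the message timeout,
            // so the spout tuple at the root of the tree is replayed sooner.
            _collector.fail(tuple);
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // This bolt emits nothing; it only persists input tuples and acks or fails them.
    }
}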
Every tuple you process must be acked or failed. Storm uses memory to track each tuple, so if you don't ack/fail every tuple, the task will eventually run out of memory.
A lot of bolts follow a common pattern of reading an input tuple, emitting tuples based on it, and then acking the tuple at the end of the execute method. These bolts fall into the categories of filters and simple functions. Storm has an interface called BasicBolt that encapsulates this pattern for you. The SplitSentence example can be written as a BasicBolt as follows:
public class SplitSentence extends BaseBasicBolt {
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String sentence = tuple.getString(0);
        for(String word: sentence.split(" ")) {
            collector.emit(new Values(word));
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
This implementation is simpler than the implementation from before and is semantically identical. Tuples emitted to BasicOutputCollector are automatically anchored to the input tuple, and the input tuple is acked for you automatically when the execute method completes.
In contrast, bolts that do aggregations or joins may delay acking a tuple until after they have computed a result based on a bunch of tuples. Aggregations and joins will commonly multi-anchor their output tuples as well. These things fall outside the simpler pattern of IBasicBolt.
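For illustration only, here is a rough sketch of such a bolt: it buffers input tuples, emits a multi-anchored running count once per batch of 100 tuples, and only then acks the buffered inputs. The batch size and output layout are made up for the example, and imports are omitted as above:
public class BatchingCountBolt extends BaseRichBolt {
    OutputCollector _collector;
    List<Tuple> _pending = new ArrayList<Tuple>();
    long _count = 0;

    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        _collector = collector;
    }

    public void execute(Tuple tuple) {
        _pending.add(tuple);
        _count++;
        if(_pending.size() >= 100) {
            // Multi-anchor the aggregate to every buffered input so a downstream
            // failure replays all of the contributing spout tuples.
            _collector.emit(_pending, new Values(_count));
            // Only now is it safe to ack the inputs that contributed to this result.
            for(Tuple t: _pending) {
                _collector.ack(t);
            }
            _pending.clear();
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("count"));
    }
}
A real implementation would also need to flush partial batches and keep the buffering window well under Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS, since buffered tuples are still being tracked while they sit in the list.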
How do I make my applications work correctly given that tuples can be replayed?
As always in software design, the answer is "it depends." Storm 0.7.0 introduced the "transactional topologies" feature, which enables you to get fully fault-tolerant exactly-once messaging semantics for most computations. Read more about transactional topologies here.
How does Storm implement reliability in an efficient way?
A Storm topology has a set of special "acker" tasks that track the DAG of tuples for every spout tuple. When an acker sees that a DAG is complete, it sends a message to the spout task that created the spout tuple to ack the message. You can set the number of acker tasks for a topology in the topology configuration using Config.TOPOLOGY_ACKERS. Storm defaults TOPOLOGY_ACKERS to one task -- you will need to increase this number for topologies processing large amounts of messages.
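As a sketch, assuming the builder from the word count example above, raising the acker count looks like this:
Config conf = new Config();
// Run four acker tasks instead of the default single task.
conf.setNumAckers(4);
StormSubmitter.submitTopology("word-count", conf, builder.createTopology());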
The best way to understand Storm's reliability implementation is to look at the lifecycle of tuples and tuple DAGs. When a tuple is created in a topology, whether in a spout or a bolt, it is given a random 64 bit id. These ids are used by ackers to track the tuple DAG for every spout tuple.
Every tuple knows the ids of all the spout tuples for which it exists in their tuple trees. When you emit a new tuple in a bolt, the spout tuple ids from the tuple's anchors are copied into the new tuple. When a tuple is acked, it sends a message to the appropriate acker tasks with information about how the tuple tree changed. In particular it tells the acker "I am now completed within the tree for this spout tuple, and here are the new tuples in the tree that were anchored to me".
For example, if tuples "D" and "E" were created based on tuple "C", then acking "C" removes "C" from the tree and adds "D" and "E" in a single update. Since "C" is removed from the tree at the same time that "D" and "E" are added to it, the tree can never be prematurely completed.
There are a few more details to how Storm tracks tuple trees. As mentioned already, you can have an arbitrary number of acker tasks in a topology. This leads to the following question: when a tuple is acked in the topology, how does it know to which acker task to send that information?
Storm uses mod hashing to map a spout tuple id to an acker task. Since every tuple carries with it the spout tuple ids of all the trees it exists within, it knows which acker tasks to communicate with.
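In other words, the routing decision amounts to something like the following sketch (illustrative only, not Storm's actual source):
public class AckerRouting {
    // Pick an acker task index for a spout tuple id by mod hashing.
    public static int ackerForSpoutTuple(long spoutTupleId, int numAckers) {
        // Double-mod keeps the result in [0, numAckers) even for negative 64-bit ids.
        return (int) ((spoutTupleId % numAckers + numAckers) % numAckers);
    }
}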
Another detail of Storm is how the acker tasks track which spout tasks are responsible for each spout tuple they're tracking. When a spout task emits a new tuple, it simply sends a message to the appropriate acker telling it that its task id is responsible for that spout tuple. Then when an acker sees a tree has been completed, it knows to which task id to send the completion message.
Acker tasks do not track the tree of tuples explicitly. For large tuple trees with tens of thousands of nodes (or more), tracking all the tuple trees could overwhelm the memory used by the ackers. Instead, the ackers take a different strategy that only requires a fixed amount of space per spout tuple (about 20 bytes). This tracking algorithm is the key to how Storm works and is one of its major breakthroughs.
An acker task stores a map from a spout tuple id to a pair of values. The first value is the task id that created the spout tuple which is used later on to send completion messages. The second value is a 64 bit number called the "ack val". The ack val is a representation of the state of the entire tuple tree, no matter how big or how small. It is simply the xor of all tuple ids that have been created and/or acked in the tree.
When an acker task sees that an "ack val" has become 0, then it knows that the tuple tree is completed. Since tuple ids are random 64 bit numbers, the chances of an "ack val" accidentally becoming 0 is extremely small. If you work the math, at 10K acks per second, it will take 50,000,000 years until a mistake is made. And even then, it will only cause data loss if that tuple happens to fail in the topology.
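The XOR bookkeeping is easy to see in miniature. The following sketch is plain Java rather than Storm code: the ack val starts at the spout tuple's id, each ack XORs in the acked tuple's id along with the ids of any tuples anchored to it, and the value returns to zero exactly when every tuple has been both created and acked:
import java.util.Random;

public class AckValDemo {
    public static void main(String[] args) {
        Random rand = new Random();
        long spoutTuple = rand.nextLong();   // id of the tuple emitted by the spout
        long wordA = rand.nextLong();        // ids of two tuples anchored to the spout tuple
        long wordB = rand.nextLong();

        // The acker starts tracking the tree when the spout tuple is emitted.
        long ackVal = spoutTuple;

        // A bolt acks the spout tuple and reports the two new anchored tuples:
        // XOR in the acked id plus the ids of the tuples it created.
        ackVal ^= spoutTuple ^ wordA ^ wordB;

        // Downstream bolts ack the word tuples without creating new ones.
        ackVal ^= wordA;
        ackVal ^= wordB;

        // Every id has been XORed in exactly twice (once on create, once on ack),
        // so the ack val is back to zero and the tree is complete.
        System.out.println(ackVal == 0);     // prints true
    }
}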
Now that you understand the reliability algorithm, let's go over all the failure cases and see how in each case Storm avoids data loss:
- A tuple isn't acked because the task died: In this case the spout tuple ids at the root of the trees for the failed tuple will time out and be replayed.
- Acker task dies: In this case all the spout tuples the acker was tracking will time out and be replayed.
- Spout task dies: In this case the source that the spout talks to is responsible for replaying the messages. For example, queues like Kestrel and RabbitMQ will place all pending messages back on the queue when a client disconnects.
As you have seen, Storm's reliability mechanisms are completely distributed, scalable, and fault-tolerant.
Tuning reliability
Acker tasks are lightweight, so you don't need very many of them in a topology. You can track their performance through the Storm UI (component id "__acker"). If the throughput doesn't look right, you'll need to add more acker tasks.
If reliability isn't important to you -- that is, you don't care about losing tuples in failure situations -- then you can improve performance by not tracking the tuple tree for spout tuples. Not tracking a tuple tree halves the number of messages transferred since normally there's an ack message for every tuple in the tuple tree. Additionally, it requires fewer ids to be kept in each downstream tuple, reducing bandwidth usage.
There are three ways to remove reliability. The first is to set Config.TOPOLOGY_ACKERS to 0. In this case, Storm will call the ack method on the spout immediately after the spout emits a tuple. The tuple tree won't be tracked.
The second way is to remove reliability on a message by message basis. You can turn off tracking for an individual spout tuple by omitting a message id in the SpoutOutputCollector.emit method.
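For example, assuming the msgId variable from the earlier spout snippet, the only difference between a tracked and an untracked spout tuple is whether a message id is passed:
_collector.emit(new Values("field1", "field2", 3), msgId);  // tracked: part of a tuple tree, will be acked or failed
_collector.emit(new Values("field1", "field2", 3));         // untracked: no tree is built and ack/fail are never called for it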
Finally, if you don't care if a particular subset of the tuples downstream in the topology fail to be processed, you can emit them as unanchored tuples. Since they're not anchored to any spout tuples, they won't cause any spout tuples to fail if they aren't acked.