1. The Union-Find Data Structure
-- Goal : Maintain a partition of a set X.
-- FIND: Given x in X, return name of x's group.
-- UNION: Given x & y, merge groups containing them.
Basic Solution :
-- Each x in X points directly to the "leader" of its group.
-- O(1) FIND [just return x's leader]
-- O(n log n) total work for n UNIONs [when 2 groups merge, the smaller group inherits the leader of the larger one, so each object's leader changes at most log2 n times]
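A minimal sketch of this eager solution (Python; the class and field names are illustrative, not fixed by the notes):

    class EagerUnionFind:
        def __init__(self, n):
            self.leader = list(range(n))             # leader[x] = name of x's group
            self.members = [[x] for x in range(n)]   # members[l] = objects currently led by l

        def find(self, x):
            return self.leader[x]                    # O(1): just return x's leader

        def union(self, x, y):
            lx, ly = self.find(x), self.find(y)
            if lx == ly:
                return
            if len(self.members[lx]) > len(self.members[ly]):
                lx, ly = ly, lx                      # make lx the smaller group
            for z in self.members[lx]:               # relabel only the smaller group
                self.leader[z] = ly
            self.members[ly].extend(self.members[lx])
            self.members[lx] = []

Each relabeling at least doubles the size of an object's group, which is where the log2 n bound on relabelings per object comes from.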
2. Lazy Unions :
-- Update only one pointer each merge!
-- When two groups merge in a UNION, make one group's leader (i.e., root of the tree) a child of the other one.
Pro: UNION reduces to 2 FINDS [r1=FIND(x), r2=FIND(y)] and O(1) extra work [link r1 , r2 together]
Con: To recover leader of an object, need to follow a path of parent pointers [not just one!] (FIND takes more time)
3. The Lazy Union Implementation :
-- Initially: For all x, parent[x]=x;
-- FIND(x): Traverse parent pointers from x until you hit the root.
-- UNION(x, y): s1 = FIND(x); s2 = FIND(y); reset the parent of one of s1, s2 to be the other.
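A minimal sketch of the lazy implementation (hypothetical Python; no union by rank or path compression yet):

    class LazyUnionFind:
        def __init__(self, n):
            self.parent = list(range(n))     # initially, every object is its own root

        def find(self, x):
            while self.parent[x] != x:       # follow parent pointers up to the root
                x = self.parent[x]
            return x

        def union(self, x, y):
            s1, s2 = self.find(x), self.find(y)
            if s1 != s2:
                self.parent[s1] = s2         # one pointer update; which root becomes the child is arbitrary here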
4. Union by Rank :
-- Ranks: For each x in X, maintain field rank[x]. (In general rank[x]=1+(max rank of x's children) )
-- Invariant : For all x in X, rank[x]=maximum number of hops from some leaf to x.
-- Initially, rank[x]=0 for all x in X
-- UNION(x, y) :
a) s1=FIND(x), s2=FIND(y)
b) If rank[s1]>rank[s2] then set parent[s2] to s1 else set parent[s1] to s2.
c) No change to rank of s1 and s2 unless ranks of s1 , s2 were equal, in which case s2's rank goes up by 1
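As a sketch, UNION with union by rank might look like the following (parent and rank are plain lists, initialized as parent[x] = x and rank[x] = 0; the naive pointer-chasing FIND is repeated so the snippet is self-contained):

    def find(parent, x):
        while parent[x] != x:              # follow parent pointers up to the root
            x = parent[x]
        return x

    def union(parent, rank, x, y):
        s1, s2 = find(parent, x), find(parent, y)
        if s1 == s2:
            return
        if rank[s1] > rank[s2]:
            parent[s2] = s1                # s1 (larger rank) stays a root
        else:
            parent[s1] = s2                # s2 stays a root
            if rank[s1] == rank[s2]:
                rank[s2] += 1              # the only case where any rank changes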
5. Properties of Ranks :
(1) For all objects x, rank[x] only goes up over time
(2) Only ranks of roots can go up [once x becomes a non-root, rank[x] is frozen forevermore]
(3) Ranks strictly increase along a path to the root
6. Rank Lemma :
-- Consider an arbitrary sequence of UNION (+FIND) operations. For every r in {0 , 1 , 2 , ... }, there are at most n/2^r objects with rank r.
-- Max rank always <= log2 n
-- Worst-case running time of FIND, UNION is O(log n). (with Union by Rank)
Proof :
-- Claim 1: If x, y have the same rank r, then their subtrees (the sets of objects from which x, respectively y, can be reached) are disjoint.
Suppose the subtrees of x, y have an object z in common => there exist paths z --> x and z --> y => one of x, y is an ancestor of the other => the ancestor has strictly larger rank (by Property 3). Contradiction!
-- Claim 2: The subtree of a rank-r object has size >= 2^r.
Base case: Initially all ranks = 0, all subtree sizes = 1
Inductive step: Nothing to prove unless the rank of some object changes (subtree sizes only go up).
Interesting case: UNION(x, y), with s1=FIND(x), s2=FIND(y), and rank[s1]=rank[s2]=r => s2's new rank = r + 1 => s2's new subtree size = s2's old subtree size + s1's old subtree size >= 2^r + 2^r = 2^(r+1) (each old subtree has size at least 2^r by the inductive hypothesis).
-- Claim 1 + Claim 2 imply the Rank Lemma.
7. Path Compression :
-- Idea: Why bother traversing a leaf-root path multiple times?
-- Path compression: After FIND(x), install shortcuts (i.e., revise parent pointers) to x's root all along the x --> root path.
-- Con: Constant-factor overhead to FIND (from "multitasking").
-- Pro: Speeds up subsequent FINDs.
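A sketch of FIND with path compression, as an iterative two-pass variant (exactly how the shortcuts are installed is an implementation choice, not fixed by the notes):

    def find(parent, x):
        root = x
        while parent[root] != root:        # pass 1: locate the root
            root = parent[root]
        while parent[x] != root:           # pass 2: point everything on the x --> root path at the root
            parent[x], x = root, parent[x]
        return root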
8. Ranks for Path Compression
-- Important: Maintain all rank fields EXACTLY as without path compression.
-- Ranks initially all 0
-- In UNION, new root = old root with bigger rank
-- When merging two roots of common rank r, reset the new root's rank to r + 1
-- Rank Lemma still holds (at most n/2^r objects with rank r)
-- Still always have rank[parent[x]]>rank[x] for all non-roots x
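Putting the two optimizations together, a compact hypothetical sketch of the full structure (the recursive find compresses the path on the way back up; rank fields are updated exactly as before):

    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            if self.parent[x] != x:
                self.parent[x] = self.find(self.parent[x])   # path compression
            return self.parent[x]

        def union(self, x, y):
            s1, s2 = self.find(x), self.find(y)
            if s1 == s2:
                return
            if self.rank[s1] > self.rank[s2]:
                self.parent[s2] = s1
            else:
                self.parent[s1] = s2
                if self.rank[s1] == self.rank[s2]:
                    self.rank[s2] += 1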
9. Hopcroft-Ullman Theorem :
-- With Union by Rank and path compression, m Union+Find operations take O(m log* n) time, where log* n = the number of times you need to apply log to n before the result is <= 1.
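To make the log* definition concrete, a tiny sketch (for example, log_star(2^65536) evaluates to 5):

    import math

    def log_star(n):
        count = 0
        while n > 1:
            n = math.log2(n)   # apply log2 repeatedly until the value drops to <= 1
            count += 1
        return count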
Proof :
-- Rank blocks: {0}, {1}, {2}, {3, 4}, {5, 6, ..., 2^4}, {17, 18, ..., 2^16}, {65537, ..., 2^65536}, ..., {..., n} (Note: there are O(log* n) different rank blocks, because the largest element of block i+1 is 2 raised to the largest element of block i, so the block boundaries form a tower of 2s)
-- Semantics: Traversal x --> parent(x) is "fast progress" <==> rank[parent[x]] in larger block than rank[x]
-- Definition: At a given point in time, call object x good if
a) x or x's parent is a root OR
b) rank[parent[x]] in larger block than rank[x]
x is bad otherwise.
-- Every FIND visits only O(log* n) good nodes [2 + # of rank blocks = O(log* n)]
-- Consider a rank block {k+1, k+2, ..., 2^k}. When a bad node x with rank in this block is visited, its parent is changed to one with strictly larger rank => this can happen at most 2^k times before x becomes good (forevermore). By the Rank Lemma, the total number of objects x with final rank in this block is Sum(i = k+1 to 2^k) {n/2^i} <= n/2^k ==> total visits to bad objects in this rank block <= (n/2^k) * 2^k = n
-- There are only O(log* n) rank blocks, so the total work = visits to good nodes + visits to bad nodes = m * O(log* n) + n * O(log* n) = O((m+n) log* n) = O(m log* n) when m = Omega(n)
10. The Ackermann Function
-- Many different definitions, all more or less equivalent.
-- Define Ak(r) for all integers k >= 0 and r >= 1. (recursively)
a) Base case: A0(r) = r + 1 for all r >= 1.
b) In general: For k, r >= 1: Ak(r) = Ak-1 applied r times to r (i.e., the r-fold composition of Ak-1, evaluated at r)
-- A1(r) = 2r, A2(r) = r * 2^r, ...
11. The Inverse Ackermann Function
-- Definition: For every n >= 4, alpha(n) = minimum value of k such that Ak(2) >= n.
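A small sketch of both definitions; note that Ak(2) explodes so quickly (A3(2) = 2048, A4(2) is already astronomically large) that the alpha loop below only terminates quickly for n <= 2048, and alpha(n) <= 4 for any conceivable n:

    def ackermann(k, r):
        # A_0(r) = r + 1; A_k(r) = A_{k-1} applied r times, starting from r
        if k == 0:
            return r + 1
        value = r
        for _ in range(r):
            value = ackermann(k - 1, value)
        return value

    def alpha(n):
        # smallest k with A_k(2) >= n (defined for n >= 4)
        k = 0
        while ackermann(k, 2) < n:
            k += 1
        return k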
12. Tarjan's Bound
-- With Union by Rank and path compression, m UNION+FIND operations take O(m alpha(n)) time, where alpha(n) is the inverse Ackermann function.
Proof :
-- Define delta(x) = max value of k such that rank[parent[x]] >= Ak (rank[x]) (Note delta(x) only goes up over time)
-- For all objects x with rank[x] >= 2, delta(x) <= alpha(n) [since A_{alpha(n)}(2) >= n, while the maximum rank is at most log2 n < n]
-- Definition: An object x is bad if all of the following hold:
(1) x is not a root
(2) parent(x) is not a root
(3) rank(x) >= 2
(4) x has an ancestor y with delta(y) = delta(x)
x is good otherwise.
-- The maximum number of good objects on an object-root path <= 1 root + 1 child of the root + 1 object with rank 0 + 1 object with rank 1 + 1 object with delta(x) = k for each k = 0, 1, 2, ..., alpha(n), which is O(alpha(n))
-- Suppose a FIND operation visits a bad object x, which therefore has an ancestor y with delta(y) = delta(x) = k; let p be the parent of x and p' the parent of y. After the FIND, rank[x's new parent] >= rank[p'] >= Ak(rank[y]) >= Ak(rank[p]) >= Ak(Ak(rank[x]))
-- Point: Path compression (at least) applies the Ak function to rank[x's parent], where k = delta(x)
-- If r = rank[x] (>= 2), then after r such pointer updates we have rank[x's parent] >= Ak(Ak(...Ak(r)...)) (r applications) = Ak+1(r) ==> while x is bad, every r visits to x increase delta(x) by at least 1; since delta(x) <= alpha(n), there are at most r * alpha(n) visits to x while it is bad.
-- Total work of m operations = visits to good objects + visits to bad objects <= O(m alpha(n)) + Sum(over all objects x) {rank[x] * alpha(n)} = O(m alpha(n)) + alpha(n) * Sum(r >= 0) {r * (# of objects with rank r)} <= O(m alpha(n)) + alpha(n) * Sum(r >= 0) {r * n/2^r} = O(m alpha(n)) + alpha(n) * n * O(1) = O((m + n) alpha(n))