Priority queues are a kind of queue in which the elements are dequeued in priority order.
- They are a mutable data abstraction: enqueues and dequeues are destructive.
- Each element has a priority, an element of a totally ordered set (usually a number)
- More important things come out first, even if they were added later
- Our convention: smaller number = higher priority
- There is no (fast) operation to find out whether an arbitrary element is in the queue
- Useful for event-based simulators (with priority = simulated time), real-time games, searching, routing, compression via Huffman coding
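The notes refer to a priority-queue signature (the original SML code is loaded separately and not shown here); as a rough, hedged rendering of the same operations in Java, the interface might look like this — the names are illustrative, not the notes' actual signature:

// Illustrative only: the priority-queue operations described above,
// with the convention that a smaller number means a higher priority.
interface MinPriorityQueue<E> {
    boolean isEmpty();                    // is the queue empty?
    void insert(E element, int priority); // enqueue an element with the given priority
    E extractMin();                       // dequeue an element with the smallest priority number
}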
There are many ways to implement this signature. For example, we could implement it as a linked list, with O(n) performance for extracting the minimum element.
A better implementation would be to use a balanced binary tree such as an AVL tree, a red-black tree, or a splay tree. In a binary search tree, we can find the minimum element by simply walking down the left children as far as possible from the root. Extracting the minimum element requires deleting it from the tree, which is pretty simple because the minimum element can't have two children. This implementation has better performance for many applications:
- insert: O(lg n), because an element must be inserted into the tree according to its priority, which serves as the key.
- extract_min: O(lg n), because red-black deletion also requires walking up and down the tree.
In fact, this is the best we can do in terms of asymptotic performance: we can implement sorting using O(n) priority queue operations, and comparison-based sorting is known to require Ω(n lg n) time in general, so if both insert and extract_min were asymptotically faster than O(lg n), we could sort faster than O(n lg n). The idea is simply to insert all the elements to be sorted into the priority queue, and then use extract_min to pull them out in the right order. (See heapsort.sml.)
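The SML file itself isn't reproduced here, but the idea translates directly; a minimal sketch in Java, using java.util.PriorityQueue as the priority queue:

import java.util.PriorityQueue;

// Sort by inserting everything into a priority queue, then repeatedly extracting
// the minimum. n insertions plus n extractions at O(lg n) each gives O(n lg n).
class HeapsortSketch {
    static int[] heapsort(int[] a) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        for (int x : a)
            pq.add(x);                    // n insertions, O(lg n) each
        int[] sorted = new int[a.length];
        for (int i = 0; i < sorted.length; i++)
            sorted[i] = pq.poll();        // n extract_min calls, O(lg n) each
        return sorted;
    }
}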
If you want to use a balanced binary tree implementation off the shelf, try a splay tree: tree elements are accessed with a lot of locality, and the splay tree will adapt to this access pattern. However, it turns out there is another data structure with the same asymptotic performance but even better constant factors.
Binary heaps
A binary heap (often just referred to as a heap) is a special kind of balanced binary tree. The tree satisfies two invariants:
- The priorities of the children of a node are at least as large as the priority of the parent. By implication, the node at the top (root) of the tree has minimum priority.
- The different paths from the root to a leaf differ in length by at most one. At the bottom of the tree there may be some missing leaves; these are to the right of all of the leaves that are present.
Suppose the priorities are just numbers. Here is a possible heap:
      3
     / \
    /   \
   5     9
  / \   /
 12  6 10
Obviously we can find the minimum element in O(1) time. Extracting it while maintaining the heap invariant will take O(lg n) time. Inserting a new element and establishing the heap invariant will also take O(lg n) time.
So asymptotic performance is the same as for balanced binary search trees, but the constant factors are better for heaps. The key observation is that we can represent a heap as an array: the children of the node at index i are at indices 2i+1 and 2i+2. This means that the array corresponding to the tree contains all the elements of the tree, read across row by row. The representation of the tree above is:
[3 5 9 12 6 10]
Given an element at index i, we can compute where the children are stored, and conversely we can go from a child at index j to its parent at index floor((j-1)/2). So we have a way to follow pointers from nodes to their parents and children, without actually representing the pointers!
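A minimal sketch of this index arithmetic (0-based array, as above); the class and method names are just for illustration:

// The implicit "pointers" of the array representation.
class HeapIndexing {
    static int leftChild(int i)  { return 2 * i + 1; }
    static int rightChild(int i) { return 2 * i + 2; }
    static int parent(int j)     { return (j - 1) / 2; }  // floor((j-1)/2), for j >= 1
}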
The rep invariant for heaps in this representation is actually simpler than when in tree form:
Rep invariant for heap a (the partial ordering property):
a[i] ≤ a[2i+1] and a[i] ≤ a[2i+2]
for every i ≥ 0 such that the child index (2i+1 or 2i+2, respectively) is less than n
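A direct translation of this invariant into a checking function, sketched in Java for an int array of size n (the function name is an assumption for illustration):

// Check the partial ordering property: every element is no larger than
// its children, wherever those children exist.
class HeapRepCheck {
    static boolean isHeap(int[] a, int n) {
        for (int i = 0; 2 * i + 1 < n; i++) {
            if (a[i] > a[2 * i + 1]) return false;                   // left child
            if (2 * i + 2 < n && a[i] > a[2 * i + 2]) return false;  // right child
        }
        return true;
    }
}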
Now let's see how to implement the priority queue operations using heaps:
insert
- Put the element at the first missing leaf. (Extend the array by one element.)
- Switch it with its parent if its parent is larger: "bubble up".
- Repeat the previous step as necessary.
Example: inserting 4 into previous tree.
      3
     / \
    /   \
   5     9        [3 5 9 12 6 10 4]
  / \   / \
 12  6 10  4

      3
     / \
    /   \
   5     4        [3 5 4 12 6 10 9]
  / \   / \
 12  6 10  9
This operation requires only O(lg n) time -- the tree has depth ceil(lg n), and we do a bounded amount of work on each level.
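A minimal sketch of insertion with "bubble up" on the array representation, written in Java over a List<Integer> of priorities (an illustrative stand-in, not the notes' SML code):

import java.util.List;

// Insert: append at the first missing leaf, then "bubble up" while the parent
// is larger, following the three steps above.
class HeapInsertSketch {
    static void insert(List<Integer> heap, int x) {
        heap.add(x);                              // 1. extend the array by one element
        int j = heap.size() - 1;
        while (j > 0) {
            int parent = (j - 1) / 2;
            if (heap.get(parent) <= heap.get(j))
                break;                            // heap order restored
            int tmp = heap.get(parent);           // 2. switch with the larger parent
            heap.set(parent, heap.get(j));
            heap.set(j, tmp);
            j = parent;                           // 3. repeat as necessary
        }
    }
}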
extract_min
extract_min works by returning the element at the root.
- Guaranteed to be the most important (smallest value) by the partial ordering property.
- Now we have the two subtrees to put right, though.
The trick is this:
- Copy a leaf (the last element) to the root (the first element), and remove that leaf, shrinking the array by one.
- If it's larger than one of the children, bubble it down.
- Swap with the higher priority child, to make sure the parent is always more important than both children.
Original heap to delete top element from (leaves two subheaps)
      3
     / \
    /   \
   5     4        [3 5 4 12 6 10 9]
  / \   / \
 12  6 10  9

copy last leaf to root:

      9
     / \
    /   \
   5     4        [9 5 4 12 6 10]
  / \   /
 12  6 10

"push down" (swap with the smaller child):

      4
     / \
    /   \
   5     9        [4 5 9 12 6 10]
  / \   /
 12  6 10
Again an O(lg n) operation.
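A matching sketch of extract_min with "push down", again in Java over a List<Integer> (illustrative only, assuming the same array layout as above):

import java.util.List;
import java.util.NoSuchElementException;

// extract_min: return the root, copy the last leaf to the root, then "push down",
// always swapping with the higher-priority (smaller) child.
class HeapExtractSketch {
    static int extractMin(List<Integer> heap) {
        if (heap.isEmpty())
            throw new NoSuchElementException("empty heap");
        int min = heap.get(0);                       // smallest, by the partial ordering property
        int last = heap.remove(heap.size() - 1);     // take the last leaf
        if (!heap.isEmpty()) {
            heap.set(0, last);                       // copy it to the root
            int n = heap.size();
            int i = 0;
            while (2 * i + 1 < n) {
                int child = 2 * i + 1;               // left child
                if (child + 1 < n && heap.get(child + 1) < heap.get(child))
                    child = child + 1;               // right child is smaller
                if (heap.get(i) <= heap.get(child))
                    break;                           // heap order restored
                int tmp = heap.get(i);               // swap parent and smaller child
                heap.set(i, heap.get(child));
                heap.set(child, tmp);
                i = child;
            }
        }
        return min;
    }
}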
SML heap code
The following code implements priority queues as binary heaps, using SML arrays.
Binomial and Fibonacci heaps
For some heap uses, we want additional operations. For finding shortest paths, we need to be able to increase the priority of an element already in the priority queue. This complicates the interface.
First, the existing signature does not acknowledge the possibility that the ordering on elements is not fixed. There are two ways to fix this: either parameterize the priority queue on two types (a priority and an element), or else have an interface in which the client notifies the priority queue of elements whose priority (and hence ordering relative to other elements) has changed.
Second, since the priority queue gives no fast way to find an element, the increase_priority call needs to take an argument that says where in the heap to find it (concretely, the array index).
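A hedged sketch of what such an increase_priority operation might look like for the array representation (the caller is responsible for tracking the index; names and signature are assumptions, not a fixed API):

import java.util.List;

// Lower the stored priority number at a known index, then bubble up, as in insert.
class IncreasePrioritySketch {
    static void increasePriority(List<Integer> heap, int index, int newPriority) {
        if (newPriority > heap.get(index))
            throw new IllegalArgumentException("can only increase priority (decrease the number)");
        heap.set(index, newPriority);
        int j = index;
        while (j > 0) {                               // bubble up
            int parent = (j - 1) / 2;
            if (heap.get(parent) <= heap.get(j)) break;
            int tmp = heap.get(parent);
            heap.set(parent, heap.get(j));
            heap.set(j, tmp);
            j = parent;
        }
    }
}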
Huffman coding
Huffman coding is an elegant compression method that uses a priority queue.
Fixed-Length Codes
Suppose we want to compress a 100,000-byte data file that we know contains only the six letters A through F. Since we have only six distinct characters to encode, we can represent each one with three bits rather than the eight bits normally used to store characters:
Letter     A    B    C    D    E    F
Codeword  000  001  010  011  100  101
This fixed-length code gives us a compression ratio of 5/8 = 62.5%. Can we do better?
Variable-Length Codes
What if we knew the relative frequencies at which each letter occurred? It would be logical to assign shorter codes to the most frequent letters and save longer codes for the infrequent letters. For example, consider this code:
Letter          A    B    C    D     E     F
Frequency (K)  45   13   12   16     9     5
Codeword        0  101  100  111  1101  1100
Using this code, our file can be represented with
(45·1 + 13·3 + 12·3 + 16·3 + 9·4 + 5·4) × 1,000 = 224,000 bits
or 28,000 bytes, which gives a compression ratio of 72%. In fact, this is an optimal character code for this file (which is not to say that the file is not further compressible by other means).
Prefix Codes
Notice that in our variable-length code, no codeword is a prefix of any other codeword. For example, we have a codeword 0, so no other codeword starts with 0. And both of our four-bit codewords start with 110, which is not a codeword. A code where no codeword is a prefix of any other is called a prefix code. Prefix codes are useful because they make a stream of bits unambiguous: we simply accumulate bits from the stream until we have completed a codeword. (Notice that encoding is simple regardless of whether our code is a prefix code: we just build a dictionary from letters to codewords, look up each letter we're trying to encode, and append the codewords to an output stream.) It turns out that prefix codes can always be used to achieve the optimal compression for a character code, so we're not losing anything by restricting ourselves to this type of character code.
When we're decoding a stream of bits using a prefix code, what data structure might we want to use to help us determine whether we've read a whole codeword yet?
One convenient representation is to use a binary tree with the codewords stored in the leaves so that the bits determine the path to the leaf. This binary tree is a trie in which only the leaves map to letters. In our example, the codeword 1100 is found by starting at the root, moving down the right subtree twice and the left subtree twice:
            100
           /   \
          /     \
         /       \
        A         55
      [45]       /  \
                /    \
              25      30
             /  \    /  \
            C    B  14    D
          [12] [13] / \  [16]
                   F   E
                  [5] [9]
Here we've labeled the leaves with their frequencies and the branches with the total frequencies of the leaves in their subtrees. You'll notice that this is a full binary tree: every nonleaf node has two children. This happens to be true of all optimal codes, so we can tell that our fixed-length code is suboptimal by examining its tree, which is clearly wasting a bit in representing E and F:
              100
             /   \
            /     \
           /       \
          /         \
        86           14
       /  \          /
      /    \        /
    58      28    14
   /  \    /  \   / \
  A    B  C    D  E   F
[45] [13][12][16][9] [5]
Since we can restrict ourselves to full trees, we know that for an alphabet C, we will have a tree with exactly |C| leaves and |C|−1 internal nodes. Given a tree T corresponding to a prefix code, we also can compute the number of bits required to encode a file:
B(T) = ∑c∈C f(c) dT(c)
where f(c) is the frequency of character c and dT(c) is the depth of the character in the tree (which also is the length of the codeword for c). We call B(T) the cost of the tree T.
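To make the trie-based decoding described above concrete, here is a minimal sketch in Java; the Node type, the 0-for-left / 1-for-right convention, and the method names are assumptions for illustration:

// Decode a bit string with a prefix-code trie: follow 0 = left, 1 = right from
// the root; emit the letter when a leaf is reached, then restart at the root.
class PrefixDecodeSketch {
    static final class Node {
        final Character letter;            // non-null only at leaves
        final Node left, right;
        Node(char letter)           { this.letter = letter; this.left = null; this.right = null; }
        Node(Node left, Node right) { this.letter = null; this.left = left; this.right = right; }
    }

    static String decode(Node root, String bits) {
        StringBuilder out = new StringBuilder();
        Node cur = root;
        for (int i = 0; i < bits.length(); i++) {
            cur = (bits.charAt(i) == '0') ? cur.left : cur.right;
            if (cur.letter != null) {      // completed a codeword
                out.append(cur.letter);
                cur = root;
            }
        }
        return out.toString();
    }
}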
Huffman's Algorithm
Huffman invented a simple algorithm for constructing such trees given the set of characters and their frequencies. Like Dijkstra's algorithm, this is a greedy algorithm, which means that it makes choices that are locally optimal yet achieves a globally optimal solution.
The algorithm constructs the tree in a bottom-up way. Given a set of leaves containing the characters and their frequencies, we merge the current two subtrees with the smallest frequencies. We perform this merging by creating a parent node labeled with the sum of the frequencies of its two children. Then we repeat this process until we have performed |C|−1 mergings to produce a single tree.
As an example, use Huffman's algorithm to construct the tree for our input; you should get the variable-length-code tree shown earlier (possibly with some left and right children swapped).
How can we implement Huffman's algorithm efficiently? The operation we need to perform repeatedly is extraction of the two subtrees with smallest frequencies, so we can use a priority queue. We can express this in ML as:
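The ML listing itself isn't included in this copy of the notes; as a hedged stand-in, here is the same loop in Java, using java.util.PriorityQueue (whose source happens to be appended at the end of this post). The Tree type is an illustrative assumption:

import java.util.PriorityQueue;

// Build a Huffman tree: repeatedly extract the two subtrees with the smallest
// frequencies and merge them under a new parent, until a single tree remains.
class HuffmanSketch {
    static final class Tree implements Comparable<Tree> {
        final int freq;
        final Character letter;       // non-null only at leaves
        final Tree left, right;
        Tree(char letter, int freq) { this.letter = letter; this.freq = freq; this.left = null; this.right = null; }
        Tree(Tree left, Tree right) { this.letter = null; this.freq = left.freq + right.freq; this.left = left; this.right = right; }
        public int compareTo(Tree other) { return Integer.compare(freq, other.freq); }
    }

    static Tree build(char[] letters, int[] freqs) {
        PriorityQueue<Tree> pq = new PriorityQueue<>();
        for (int i = 0; i < letters.length; i++)
            pq.add(new Tree(letters[i], freqs[i]));   // one leaf per character
        while (pq.size() > 1) {
            Tree a = pq.poll(), b = pq.poll();        // the two smallest frequencies
            pq.add(new Tree(a, b));                   // merged parent, freq = a.freq + b.freq
        }
        return pq.poll();                             // after |C|-1 merges: the root
    }
}

For the table above, build(new char[]{'A','B','C','D','E','F'}, new int[]{45, 13, 12, 16, 9, 5}) produces a tree with total frequency 100 and codeword lengths 1, 3, 3, 3, 4, 4; the exact left/right assignment depends on how the merged subtrees are ordered.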
We won't prove that the result is an optimal prefix tree, but why does this algorithm produce a valid and full prefix tree? We can see that every time we merge two subtrees, we're differentiating the codewords of all of their leaves by prepending a 0 to all the codewords of the left subtree and a 1 to all the codewords of the right subtree. And every non-leaf node has exactly two children by construction.
Let's analyze the running time of this algorithm if our alphabet has n characters. Building the initial queue takes time O(n log n) since each enqueue operation takes O(log n) time. Then we perform n−1 merges, each of which takes time O(log n). Thus Huffman's algorithm takes O(n log n) time.
Adaptive Huffman Coding
If we want to compress a file with our current approach, we have to scan through the whole file to tally the frequencies of each character. Then we use the Huffman algorithm to compute an optimal prefix tree, and we scan the file a second time, writing out the codewords of each character of the file. But that's not sufficient. Why? We also need to write out the prefix tree so that the decompression algorithm knows how to interpret the stream of bits.
So our algorithm has one major potential drawback: We need to scan the whole input file before we can build the prefix tree. For large files, this can take a long time. (Disk access is very slow compared to CPU cycle times.) And in some cases it may be unreasonable; we may have a long stream of data that we'd like to compress, and it could be unreasonable to have to accumulate the data until we can scan it all. We'd like an algorithm that allows us to compress a stream of data without seeing the whole prefix tree in advance.
The solution is adaptive Huffman coding, which builds the prefix tree incrementally in such a way that the coding is always optimal for the sequence of characters already seen. We start with a tree that has a frequency of zero for each character. When we read an input character, we increment the frequency of that character (and the frequency in all branches above it). We then may have to modify the tree to maintain the invariant that the least frequent characters are at the greatest depths. Because the tree is constructed incrementally, the decoding algorithm simply updates its copy of the tree after every character is decoded, so we don't need to include the prefix tree along with the compressed data.
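For reference, the remainder of this post reproduces two complete Java implementations of these ideas: the Sun (2006) source of java.util.PriorityQueue, a binary heap ordered by the elements' natural ordering or a supplied Comparator, followed by Mark Allen Weiss's BinaryHeap class together with the PriorityQueue interface and UnderflowException it uses.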
/* * @(#)PriorityQueue.java 1.16 06/04/21 * * Copyright 2006 Sun Microsystems, Inc. All rights reserved. * SUN PROPRIETARY/CONFIDENTIAL. Use is subject to license terms. */ package java.util; /** * An unbounded priority {@linkplain Queue queue} based on a priority heap. * The elements of the priority queue are ordered according to their * {@linkplain Comparable natural ordering}, or by a {@link Comparator} * provided at queue construction time, depending on which constructor is * used. A priority queue does not permit {@code null} elements. * A priority queue relying on natural ordering also does not permit * insertion of non-comparable objects (doing so may result in * {@code ClassCastException}). * * <p>The <em>head</em> of this queue is the <em>least</em> element * with respect to the specified ordering. If multiple elements are * tied for least value, the head is one of those elements -- ties are * broken arbitrarily. The queue retrieval operations {@code poll}, * {@code remove}, {@code peek}, and {@code element} access the * element at the head of the queue. * * <p>A priority queue is unbounded, but has an internal * <i>capacity</i> governing the size of an array used to store the * elements on the queue. It is always at least as large as the queue * size. As elements are added to a priority queue, its capacity * grows automatically. The details of the growth policy are not * specified. * * <p>This class and its iterator implement all of the * <em>optional</em> methods of the {@link Collection} and {@link * Iterator} interfaces. The Iterator provided in method {@link * #iterator()} is <em>not</em> guaranteed to traverse the elements of * the priority queue in any particular order. If you need ordered * traversal, consider using {@code Arrays.sort(pq.toArray())}. * * <p> <strong>Note that this implementation is not synchronized.</strong> * Multiple threads should not access a {@code PriorityQueue} * instance concurrently if any of the threads modifies the queue. * Instead, use the thread-safe {@link * java.util.concurrent.PriorityBlockingQueue} class. * * <p>Implementation note: this implementation provides * O(log(n)) time for the enqueing and dequeing methods * ({@code offer}, {@code poll}, {@code remove()} and {@code add}); * linear time for the {@code remove(Object)} and {@code contains(Object)} * methods; and constant time for the retrieval methods * ({@code peek}, {@code element}, and {@code size}). * * <p>This class is a member of the * <a href="{@docRoot}/../technotes/guides/collections/index.html"> * Java Collections Framework</a>. * * @since 1.5 * @version 1.16, 04/21/06 * @author Josh Bloch, Doug Lea * @param <E> the type of elements held in this collection */ public class PriorityQueue<E> extends AbstractQueue<E> implements java.io.Serializable { private static final long serialVersionUID = -7720805057305804111L; private static final int DEFAULT_INITIAL_CAPACITY = 11; /** * Priority queue represented as a balanced binary heap: the two * children of queue[n] are queue[2*n+1] and queue[2*(n+1)]. The * priority queue is ordered by comparator, or by the elements' * natural ordering, if comparator is null: For each node n in the * heap and each descendant d of n, n <= d. The element with the * lowest value is in queue[0], assuming the queue is nonempty. */ private transient Object[] queue; /** * The number of elements in the priority queue. */ private int size = 0; /** * The comparator, or null if priority queue uses elements' * natural ordering. */ private final Comparator<? 
super E> comparator; /** * The number of times this priority queue has been * <i>structurally modified</i>. See AbstractList for gory details. */ private transient int modCount = 0; /** * Creates a {@code PriorityQueue} with the default initial * capacity (11) that orders its elements according to their * {@linkplain Comparable natural ordering}. */ public PriorityQueue() { this(DEFAULT_INITIAL_CAPACITY, null); } /** * Creates a {@code PriorityQueue} with the specified initial * capacity that orders its elements according to their * {@linkplain Comparable natural ordering}. * * @param initialCapacity the initial capacity for this priority queue * @throws IllegalArgumentException if {@code initialCapacity} is less * than 1 */ public PriorityQueue(int initialCapacity) { this(initialCapacity, null); } /** * Creates a {@code PriorityQueue} with the specified initial capacity * that orders its elements according to the specified comparator. * * @param initialCapacity the initial capacity for this priority queue * @param comparator the comparator that will be used to order this * priority queue. If {@code null}, the {@linkplain Comparable * natural ordering} of the elements will be used. * @throws IllegalArgumentException if {@code initialCapacity} is * less than 1 */ public PriorityQueue(int initialCapacity, Comparator<? super E> comparator) { // Note: This restriction of at least one is not actually needed, // but continues for 1.5 compatibility if (initialCapacity < 1) throw new IllegalArgumentException(); this.queue = new Object[initialCapacity]; this.comparator = comparator; } /** * Creates a {@code PriorityQueue} containing the elements in the * specified collection. If the specified collection is an instance of * a {@link SortedSet} or is another {@code PriorityQueue}, this * priority queue will be ordered according to the same ordering. * Otherwise, this priority queue will be ordered according to the * {@linkplain Comparable natural ordering} of its elements. * * @param c the collection whose elements are to be placed * into this priority queue * @throws ClassCastException if elements of the specified collection * cannot be compared to one another according to the priority * queue's ordering * @throws NullPointerException if the specified collection or any * of its elements are null */ public PriorityQueue(Collection<? extends E> c) { initFromCollection(c); if (c instanceof SortedSet) comparator = (Comparator<? super E>) ((SortedSet<? extends E>)c).comparator(); else if (c instanceof PriorityQueue) comparator = (Comparator<? super E>) ((PriorityQueue<? extends E>)c).comparator(); else { comparator = null; heapify(); } } /** * Creates a {@code PriorityQueue} containing the elements in the * specified priority queue. This priority queue will be * ordered according to the same ordering as the given priority * queue. * * @param c the priority queue whose elements are to be placed * into this priority queue * @throws ClassCastException if elements of {@code c} cannot be * compared to one another according to {@code c}'s * ordering * @throws NullPointerException if the specified priority queue or any * of its elements are null */ public PriorityQueue(PriorityQueue<? extends E> c) { comparator = (Comparator<? super E>)c.comparator(); initFromCollection(c); } /** * Creates a {@code PriorityQueue} containing the elements in the * specified sorted set. This priority queue will be ordered * according to the same ordering as the given sorted set. 
* * @param c the sorted set whose elements are to be placed * into this priority queue * @throws ClassCastException if elements of the specified sorted * set cannot be compared to one another according to the * sorted set's ordering * @throws NullPointerException if the specified sorted set or any * of its elements are null */ public PriorityQueue(SortedSet<? extends E> c) { comparator = (Comparator<? super E>)c.comparator(); initFromCollection(c); } /** * Initializes queue array with elements from the given Collection. * * @param c the collection */ private void initFromCollection(Collection<? extends E> c) { Object[] a = c.toArray(); // If c.toArray incorrectly doesn't return Object[], copy it. if (a.getClass() != Object[].class) a = Arrays.copyOf(a, a.length, Object[].class); queue = a; size = a.length; } /** * Increases the capacity of the array. * * @param minCapacity the desired minimum capacity */ private void grow(int minCapacity) { if (minCapacity < 0) // overflow throw new OutOfMemoryError(); int oldCapacity = queue.length; // Double size if small; else grow by 50% int newCapacity = ((oldCapacity < 64)? ((oldCapacity + 1) * 2): ((oldCapacity / 2) * 3)); if (newCapacity < 0) // overflow newCapacity = Integer.MAX_VALUE; if (newCapacity < minCapacity) newCapacity = minCapacity; queue = Arrays.copyOf(queue, newCapacity); } /** * Inserts the specified element into this priority queue. * * @return {@code true} (as specified by {@link Collection#add}) * @throws ClassCastException if the specified element cannot be * compared with elements currently in this priority queue * according to the priority queue's ordering * @throws NullPointerException if the specified element is null */ public boolean add(E e) { return offer(e); } /** * Inserts the specified element into this priority queue. * * @return {@code true} (as specified by {@link Queue#offer}) * @throws ClassCastException if the specified element cannot be * compared with elements currently in this priority queue * according to the priority queue's ordering * @throws NullPointerException if the specified element is null */ public boolean offer(E e) { if (e == null) throw new NullPointerException(); modCount++; int i = size; if (i >= queue.length) grow(i + 1); size = i + 1; if (i == 0) queue[0] = e; else siftUp(i, e); return true; } public E peek() { if (size == 0) return null; return (E) queue[0]; } private int indexOf(Object o) { if (o != null) { for (int i = 0; i < size; i++) if (o.equals(queue[i])) return i; } return -1; } /** * Removes a single instance of the specified element from this queue, * if it is present. More formally, removes an element {@code e} such * that {@code o.equals(e)}, if this queue contains one or more such * elements. Returns {@code true} if and only if this queue contained * the specified element (or equivalently, if this queue changed as a * result of the call). * * @param o element to be removed from this queue, if present * @return {@code true} if this queue changed as a result of the call */ public boolean remove(Object o) { int i = indexOf(o); if (i == -1) return false; else { removeAt(i); return true; } } /** * Version of remove using reference equality, not equals. * Needed by iterator.remove. * * @param o element to be removed from this queue, if present * @return {@code true} if removed */ boolean removeEq(Object o) { for (int i = 0; i < size; i++) { if (o == queue[i]) { removeAt(i); return true; } } return false; } /** * Returns {@code true} if this queue contains the specified element. 
* More formally, returns {@code true} if and only if this queue contains * at least one element {@code e} such that {@code o.equals(e)}. * * @param o object to be checked for containment in this queue * @return {@code true} if this queue contains the specified element */ public boolean contains(Object o) { return indexOf(o) != -1; } /** * Returns an array containing all of the elements in this queue. * The elements are in no particular order. * * <p>The returned array will be "safe" in that no references to it are * maintained by this queue. (In other words, this method must allocate * a new array). The caller is thus free to modify the returned array. * * <p>This method acts as bridge between array-based and collection-based * APIs. * * @return an array containing all of the elements in this queue */ public Object[] toArray() { return Arrays.copyOf(queue, size); } /** * Returns an array containing all of the elements in this queue; the * runtime type of the returned array is that of the specified array. * The returned array elements are in no particular order. * If the queue fits in the specified array, it is returned therein. * Otherwise, a new array is allocated with the runtime type of the * specified array and the size of this queue. * * <p>If the queue fits in the specified array with room to spare * (i.e., the array has more elements than the queue), the element in * the array immediately following the end of the collection is set to * {@code null}. * * <p>Like the {@link #toArray()} method, this method acts as bridge between * array-based and collection-based APIs. Further, this method allows * precise control over the runtime type of the output array, and may, * under certain circumstances, be used to save allocation costs. * * <p>Suppose <tt>x</tt> is a queue known to contain only strings. * The following code can be used to dump the queue into a newly * allocated array of <tt>String</tt>: * * <pre> * String[] y = x.toArray(new String[0]);</pre> * * Note that <tt>toArray(new Object[0])</tt> is identical in function to * <tt>toArray()</tt>. * * @param a the array into which the elements of the queue are to * be stored, if it is big enough; otherwise, a new array of the * same runtime type is allocated for this purpose. * @return an array containing all of the elements in this queue * @throws ArrayStoreException if the runtime type of the specified array * is not a supertype of the runtime type of every element in * this queue * @throws NullPointerException if the specified array is null */ public <T> T[] toArray(T[] a) { if (a.length < size) // Make a new array of a's runtime type, but my contents: return (T[]) Arrays.copyOf(queue, size, a.getClass()); System.arraycopy(queue, 0, a, 0, size); if (a.length > size) a[size] = null; return a; } /** * Returns an iterator over the elements in this queue. The iterator * does not return the elements in any particular order. * * @return an iterator over the elements in this queue */ public Iterator<E> iterator() { return new Itr(); } private final class Itr implements Iterator<E> { /** * Index (into queue array) of element to be returned by * subsequent call to next. */ private int cursor = 0; /** * Index of element returned by most recent call to next, * unless that element came from the forgetMeNot list. * Set to -1 if element is deleted by a call to remove. 
*/ private int lastRet = -1; /** * A queue of elements that were moved from the unvisited portion of * the heap into the visited portion as a result of "unlucky" element * removals during the iteration. (Unlucky element removals are those * that require a siftup instead of a siftdown.) We must visit all of * the elements in this list to complete the iteration. We do this * after we've completed the "normal" iteration. * * We expect that most iterations, even those involving removals, * will not need to store elements in this field. */ private ArrayDeque<E> forgetMeNot = null; /** * Element returned by the most recent call to next iff that * element was drawn from the forgetMeNot list. */ private E lastRetElt = null; /** * The modCount value that the iterator believes that the backing * Queue should have. If this expectation is violated, the iterator * has detected concurrent modification. */ private int expectedModCount = modCount; public boolean hasNext() { return cursor < size || (forgetMeNot != null && !forgetMeNot.isEmpty()); } public E next() { if (expectedModCount != modCount) throw new ConcurrentModificationException(); if (cursor < size) return (E) queue[lastRet = cursor++]; if (forgetMeNot != null) { lastRet = -1; lastRetElt = forgetMeNot.poll(); if (lastRetElt != null) return lastRetElt; } throw new NoSuchElementException(); } public void remove() { if (expectedModCount != modCount) throw new ConcurrentModificationException(); if (lastRet != -1) { E moved = PriorityQueue.this.removeAt(lastRet); lastRet = -1; if (moved == null) cursor--; else { if (forgetMeNot == null) forgetMeNot = new ArrayDeque<E>(); forgetMeNot.add(moved); } } else if (lastRetElt != null) { PriorityQueue.this.removeEq(lastRetElt); lastRetElt = null; } else { throw new IllegalStateException(); } expectedModCount = modCount; } } public int size() { return size; } /** * Removes all of the elements from this priority queue. * The queue will be empty after this call returns. */ public void clear() { modCount++; for (int i = 0; i < size; i++) queue[i] = null; size = 0; } public E poll() { if (size == 0) return null; int s = --size; modCount++; E result = (E) queue[0]; E x = (E) queue[s]; queue[s] = null; if (s != 0) siftDown(0, x); return result; } /** * Removes the ith element from queue. * * Normally this method leaves the elements at up to i-1, * inclusive, untouched. Under these circumstances, it returns * null. Occasionally, in order to maintain the heap invariant, * it must swap a later element of the list with one earlier than * i. Under these circumstances, this method returns the element * that was previously at the end of the list and is now at some * position before i. This fact is used by iterator.remove so as to * avoid missing traversing elements. */ private E removeAt(int i) { assert i >= 0 && i < size; modCount++; int s = --size; if (s == i) // removed last element queue[i] = null; else { E moved = (E) queue[s]; queue[s] = null; siftDown(i, moved); if (queue[i] == moved) { siftUp(i, moved); if (queue[i] != moved) return moved; } } return null; } /** * Inserts item x at position k, maintaining heap invariant by * promoting x up the tree until it is greater than or equal to * its parent, or is the root. * * To simplify and speed up coercions and comparisons. the * Comparable and Comparator versions are separated into different * methods that are otherwise identical. (Similarly for siftDown.) 
* * @param k the position to fill * @param x the item to insert */ private void siftUp(int k, E x) { if (comparator != null) siftUpUsingComparator(k, x); else siftUpComparable(k, x); } private void siftUpComparable(int k, E x) { Comparable<? super E> key = (Comparable<? super E>) x; while (k > 0) { int parent = (k - 1) >>> 1; Object e = queue[parent]; if (key.compareTo((E) e) >= 0) break; queue[k] = e; k = parent; } queue[k] = key; } private void siftUpUsingComparator(int k, E x) { while (k > 0) { int parent = (k - 1) >>> 1; Object e = queue[parent]; if (comparator.compare(x, (E) e) >= 0) break; queue[k] = e; k = parent; } queue[k] = x; } /** * Inserts item x at position k, maintaining heap invariant by * demoting x down the tree repeatedly until it is less than or * equal to its children or is a leaf. * * @param k the position to fill * @param x the item to insert */ private void siftDown(int k, E x) { if (comparator != null) siftDownUsingComparator(k, x); else siftDownComparable(k, x); } private void siftDownComparable(int k, E x) { Comparable<? super E> key = (Comparable<? super E>)x; int half = size >>> 1; // loop while a non-leaf while (k < half) { int child = (k << 1) + 1; // assume left child is least Object c = queue[child]; int right = child + 1; if (right < size && ((Comparable<? super E>) c).compareTo((E) queue[right]) > 0) c = queue[child = right]; if (key.compareTo((E) c) <= 0) break; queue[k] = c; k = child; } queue[k] = key; } private void siftDownUsingComparator(int k, E x) { int half = size >>> 1; while (k < half) { int child = (k << 1) + 1; Object c = queue[child]; int right = child + 1; if (right < size && comparator.compare((E) c, (E) queue[right]) > 0) c = queue[child = right]; if (comparator.compare(x, (E) c) <= 0) break; queue[k] = c; k = child; } queue[k] = x; } /** * Establishes the heap invariant (described above) in the entire tree, * assuming nothing about the order of the elements prior to the call. */ private void heapify() { for (int i = (size >>> 1) - 1; i >= 0; i--) siftDown(i, (E) queue[i]); } /** * Returns the comparator used to order the elements in this * queue, or {@code null} if this queue is sorted according to * the {@linkplain Comparable natural ordering} of its elements. * * @return the comparator used to order this queue, or * {@code null} if this queue is sorted according to the * natural ordering of its elements */ public Comparator<? super E> comparator() { return comparator; } /** * Saves the state of the instance to a stream (that * is, serializes it). * * @serialData The length of the array backing the instance is * emitted (int), followed by all of its elements * (each an {@code Object}) in the proper order. * @param s the stream */ private void writeObject(java.io.ObjectOutputStream s) throws java.io.IOException{ // Write out element count, and any hidden stuff s.defaultWriteObject(); // Write out array length, for compatibility with 1.5 version s.writeInt(Math.max(2, size + 1)); // Write out all elements in the "proper order". for (int i = 0; i < size; i++) s.writeObject(queue[i]); } /** * Reconstitutes the {@code PriorityQueue} instance from a stream * (that is, deserializes it). * * @param s the stream */ private void readObject(java.io.ObjectInputStream s) throws java.io.IOException, ClassNotFoundException { // Read in size, and any hidden stuff s.defaultReadObject(); // Read in (and discard) array length s.readInt(); queue = new Object[size]; // Read in all elements. 
for (int i = 0; i < size; i++) queue[i] = s.readObject(); // Elements are guaranteed to be in "proper order", but the // spec has never explained what that might be. heapify(); } }
// BinaryHeap class // // CONSTRUCTION: empty or with initial array. // // ******************PUBLIC OPERATIONS********************* // void insert( x ) --> Insert x // Comparable deleteMin( )--> Return and remove smallest item // Comparable findMin( ) --> Return smallest item // boolean isEmpty( ) --> Return true if empty; else false // void makeEmpty( ) --> Remove all items // ******************ERRORS******************************** // Throws UnderflowException for findMin and deleteMin when empty /** * Implements a binary heap. * Note that all "matching" is based on the compareTo method. * @author Mark Allen Weiss */ public class BinaryHeap implements PriorityQueue { /** * Construct the binary heap. */ public BinaryHeap( ) { currentSize = 0; array = new Comparable[ DEFAULT_CAPACITY + 1 ]; } /** * Construct the binary heap from an array. * @param items the inital items in the binary heap. */ public BinaryHeap( Comparable [ ] items ) { currentSize = items.length; array = new Comparable[ items.length + 1 ]; for( int i = 0; i < items.length; i++ ) array[ i + 1 ] = items[ i ]; buildHeap( ); } /** * Insert into the priority queue. * Duplicates are allowed. * @param x the item to insert. * @return null, signifying that decreaseKey cannot be used. */ public PriorityQueue.Position insert( Comparable x ) { if( currentSize + 1 == array.length ) doubleArray( ); // Percolate up int hole = ++currentSize; array[ 0 ] = x; for( ; x.compareTo( array[ hole / 2 ] ) < 0; hole /= 2 ) array[ hole ] = array[ hole / 2 ]; array[ hole ] = x; return null; } /** * @throws UnsupportedOperationException because no Positions are returned * by the insert method for BinaryHeap. */ public void decreaseKey( PriorityQueue.Position p, Comparable newVal ) { throw new UnsupportedOperationException( "Cannot use decreaseKey for binary heap" ); } /** * Find the smallest item in the priority queue. * @return the smallest item. * @throws UnderflowException if empty. */ public Comparable findMin( ) { if( isEmpty( ) ) throw new UnderflowException( "Empty binary heap" ); return array[ 1 ]; } /** * Remove the smallest item from the priority queue. * @return the smallest item. * @throws UnderflowException if empty. */ public Comparable deleteMin( ) { Comparable minItem = findMin( ); array[ 1 ] = array[ currentSize-- ]; percolateDown( 1 ); return minItem; } /** * Establish heap order property from an arbitrary * arrangement of items. Runs in linear time. */ private void buildHeap( ) { for( int i = currentSize / 2; i > 0; i-- ) percolateDown( i ); } /** * Test if the priority queue is logically empty. * @return true if empty, false otherwise. */ public boolean isEmpty( ) { return currentSize == 0; } /** * Returns size. * @return current size. */ public int size( ) { return currentSize; } /** * Make the priority queue logically empty. */ public void makeEmpty( ) { currentSize = 0; } private static final int DEFAULT_CAPACITY = 100; private int currentSize; // Number of elements in heap private Comparable [ ] array; // The heap array /** * Internal method to percolate down in the heap. * @param hole the index at which the percolate begins. */ private void percolateDown( int hole ) { int child; Comparable tmp = array[ hole ]; for( ; hole * 2 <= currentSize; hole = child ) { child = hole * 2; if( child != currentSize && array[ child + 1 ].compareTo( array[ child ] ) < 0 ) child++; if( array[ child ].compareTo( tmp ) < 0 ) array[ hole ] = array[ child ]; else break; } array[ hole ] = tmp; } /** * Internal method to extend array. 
*/ private void doubleArray( ) { Comparable [ ] newArray; newArray = new Comparable[ array.length * 2 ]; for( int i = 0; i < array.length; i++ ) newArray[ i ] = array[ i ]; array = newArray; } // Test program public static void main( String [ ] args ) { int numItems = 10000; BinaryHeap h1 = new BinaryHeap( ); Integer [ ] items = new Integer[ numItems - 1 ]; int i = 37; int j; for( i = 37, j = 0; i != 0; i = ( i + 37 ) % numItems, j++ ) { h1.insert( new Integer( i ) ); items[ j ] = new Integer( i ); } for( i = 1; i < numItems; i++ ) if( ((Integer)( h1.deleteMin( ) )).intValue( ) != i ) System.out.println( "Oops! " + i ); BinaryHeap h2 = new BinaryHeap( items ); for( i = 1; i < numItems; i++ ) if( ((Integer)( h2.deleteMin( ) )).intValue( ) != i ) System.out.println( "Oops! " + i ); } } // PriorityQueue interface // // ******************PUBLIC OPERATIONS********************* // Position insert( x ) --> Insert x // Comparable deleteMin( )--> Return and remove smallest item // Comparable findMin( ) --> Return smallest item // boolean isEmpty( ) --> Return true if empty; else false // void makeEmpty( ) --> Remove all items // int size( ) --> Return size // void decreaseKey( p, v)--> Decrease value in p to v // ******************ERRORS******************************** // Throws UnderflowException for findMin and deleteMin when empty /** * PriorityQueue interface. * Some priority queues may support a decreaseKey operation, * but this is considered an advanced operation. If so, * a Position is returned by insert. * Note that all "matching" is based on the compareTo method. * @author Mark Allen Weiss */ public interface PriorityQueue { /** * The Position interface represents a type that can * be used for the decreaseKey operation. */ public interface Position { /** * Returns the value stored at this position. * @return the value stored at this position. */ Comparable getValue( ); } /** * Insert into the priority queue, maintaining heap order. * Duplicates are allowed. * @param x the item to insert. * @return may return a Position useful for decreaseKey. */ Position insert( Comparable x ); /** * Find the smallest item in the priority queue. * @return the smallest item. * @throws UnderflowException if empty. */ Comparable findMin( ); /** * Remove the smallest item from the priority queue. * @return the smallest item. * @throws UnderflowException if empty. */ Comparable deleteMin( ); /** * Test if the priority queue is logically empty. * @return true if empty, false otherwise. */ boolean isEmpty( ); /** * Make the priority queue logically empty. */ void makeEmpty( ); /** * Returns the size. * @return current size. */ int size( ); /** * Change the value of the item stored in the pairing heap. * This is considered an advanced operation and might not * be supported by all priority queues. A priority queue * will signal its intention to not support decreaseKey by * having insert return null consistently. * @param p any non-null Position returned by insert. * @param newVal the new value, which must be smaller * than the currently stored value. * @throws IllegalArgumentException if p invalid. * @throws UnsupportedOperationException if appropriate. */ void decreaseKey( Position p, Comparable newVal ); } /** * Exception class for access in empty containers * such as stacks, queues, and priority queues. * @author Mark Allen Weiss */ public class UnderflowException extends RuntimeException { /** * Construct this exception object. * @param message the error message. 
*/ public UnderflowException( String message ) { super( message ); } }