This class is still under active development. Criticism is welcome; anyone who lands a serious blow will be thanked!
Implementation:
```java
package creative.air.datastructure.map;

import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock;
import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;
import java.util.logging.Level;
import java.util.logging.Logger;

import creative.air.datastructure.map.AirHashMap.AirEntry;

/**
 * @author Eric Han feuyeux@gmail.com 06/09/2012
 * @since 0.0.1
 * @version 0.0.1
 */
public class HashMapCache<H, L> {
    enum ConcurrentStrategy {
        NOTIFY, WAIT, TIMEOUT
    }

    enum FullStrategy {
        NOTIFY, DISCARD, REPLACE
    }

    static final Logger logger = Logger.getLogger(HashMapCache.class.getName());

    // cache full strategy
    private int capacity = 12;
    private FullStrategy fullStrategy = FullStrategy.NOTIFY;

    // cache lock strategy
    private static ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private static WriteLock wLock = lock.writeLock();
    private static ReadLock rLock = lock.readLock();
    private ConcurrentStrategy concurrentStrategy = ConcurrentStrategy.TIMEOUT;
    private long waitingLockTimeout = 500;

    private AirHashMap<H, L> map;

    public HashMapCache() {
        map = new AirHashMap<H, L>();
    }

    public HashMapCache(int capacity) {
        this.capacity = capacity;
        map = new AirHashMap<H, L>();
    }

    public HashMapCache(int capacity, int initialCapacity, float loadFactor) {
        this.capacity = capacity;
        map = new AirHashMap<H, L>(initialCapacity, loadFactor);
    }

    public void clear() {
        try {
            lock(wLock);
            try {
                map.clear();
            } finally {
                wLock.unlock(); // only unlock once the lock is actually held
            }
        } catch (Exception e) {
            logger.log(Level.SEVERE, "clear error", e);
        }
    }

    public void remove(H key) {
        try {
            lock(wLock);
            try {
                map.remove(key);
            } finally {
                wLock.unlock();
            }
        } catch (Exception e) {
            logger.log(Level.SEVERE, "remove error", e); // was mislabeled "clear error"
        }
    }

    public L put(H key, L value) throws Exception {
        lock(wLock);
        try {
            if (this.capacity < map.size()) {
                switch (fullStrategy) {
                case NOTIFY:
                    throw new Exception("100 reached the cache's maximum");
                case DISCARD:
                    return null;
                case REPLACE:
                    // TODO it's a dangerous way
                    // which cannot guarantee the data already stored in cache
                    AirEntry<H, L> entry = map.getTable()[0];
                    remove(entry.getKey());
                    break; // was missing: fell through into the default throw
                default:
                    throw new Exception("100 reached the cache's maximum");
                }
            }
            try {
                return map.put(key, value);
            } catch (Exception e) {
                logger.log(Level.SEVERE, "put error", e);
                return null;
            }
        } finally {
            wLock.unlock(); // was skipped on the NOTIFY/DISCARD exits above
        }
    }

    public L get(H key) {
        try {
            lock(rLock);
            try {
                return map.get(key);
            } finally {
                rLock.unlock();
            }
        } catch (Exception e) {
            logger.log(Level.SEVERE, "get error", e);
            return null;
        }
    }

    public Iterator<Map.Entry<H, L>> iterate() {
        try {
            lock(rLock);
            try {
                return map.entrySet().iterator();
            } finally {
                rLock.unlock();
            }
        } catch (Exception e) {
            logger.log(Level.SEVERE, "iterate error", e);
            return null;
        }
    }

    private void lock(Lock lock) throws Exception {
        switch (concurrentStrategy) {
        case NOTIFY:
            throw new Exception("200 Cannot control the cache");
        case WAIT:
            lock.lock();
            break;
        case TIMEOUT:
            // originally TimeUnit.MICROSECONDS and the result was ignored, so on a
            // timeout the caller proceeded without the lock and later unlocked illegally
            if (!lock.tryLock(waitingLockTimeout, TimeUnit.MILLISECONDS)) {
                throw new Exception("201 timed out waiting for the cache lock");
            }
            break;
        }
    }

    public Set<Map.Entry<H, L>> entrySet() {
        return map.entrySet();
    }

    public int getCapacity() { return capacity; }
    public ConcurrentStrategy getConcurrentStrategy() { return concurrentStrategy; }
    public FullStrategy getFullStrategy() { return fullStrategy; }
    public void setCapacity(int capacity) { this.capacity = capacity; }
    public void setConcurrentStrategy(ConcurrentStrategy concurrentStrategy) { this.concurrentStrategy = concurrentStrategy; }
    public void setFullStrategy(FullStrategy fullStrategy) { this.fullStrategy = fullStrategy; }
    public long getWaitingLockTimeout() { return waitingLockTimeout; }
    public void setWaitingLockTimeout(long waitingLockTimeout) { this.waitingLockTimeout = waitingLockTimeout; }
}
```
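The REPLACE branch above evicts whatever happens to sit in bucket 0, which the TODO itself flags as unsafe. For comparison, here is a self-contained sketch of capacity-bounded eviction using the JDK's LinkedHashMap in access order; this is an alternative illustration, not part of the cache above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSketch {
    // A LinkedHashMap constructed in access order evicts the least-recently-used
    // entry as soon as size() exceeds the capacity captured here.
    static <K, V> Map<K, V> boundedCache(final int capacity) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = boundedCache(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a" so "b" becomes least recently used
        cache.put("c", 3); // evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Unlike the bucket-0 trick, this evicts a well-defined victim, though it is still not thread-safe on its own.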
The hash map:
```java
package creative.air.datastructure.map;

import java.io.IOException;
import java.io.Serializable;
import java.util.AbstractCollection;
import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Set;

/**
 * @author Eric Han feuyeux@gmail.com 16/09/2012
 * @since 0.0.1
 * @version 0.0.1
 */
public class AirHashMap<K, V> extends AbstractMap<K, V> implements Map<K, V>, Cloneable, Serializable {
    private static final long serialVersionUID = 3476735979928755996L;

    static final int DEFAULT_INITIAL_CAPACITY = 16; // initial capacity
    static final int MAXIMUM_CAPACITY = 1 << 30;
    static final float DEFAULT_LOAD_FACTOR = 0.75f; // load factor

    transient AirEntry<K, V>[] table; // hash bucket array
    transient int size;
    int threshold; // resize threshold
    final float loadFactor;
    transient int modCount; // structural modification count, for fail-fast iterators

    public AirHashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " + initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " + loadFactor);

        // Find a power of 2 >= initialCapacity
        int capacity = 1;
        while (capacity < initialCapacity)
            capacity <<= 1;

        this.loadFactor = loadFactor;
        threshold = (int) (capacity * loadFactor);
        table = new AirEntry[capacity];
        init();
    }

    public AirHashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    public AirHashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        threshold = (int) (DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR);
        table = new AirEntry[DEFAULT_INITIAL_CAPACITY];
        init();
    }

    public AirHashMap(Map<? extends K, ? extends V> m) {
        this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1, DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);
        putAllForCreate(m);
    }

    void init() {
    }

    static int hash(int h) {
        // This function ensures that hashCodes that differ only by
        // constant multiples at each bit position have a bounded
        // number of collisions (approximately 8 at default load factor).
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public int size() {
        return size;
    }

    public boolean isEmpty() {
        return size == 0;
    }

    public V get(Object key) {
        if (key == null)
            return getForNullKey();
        int hash = hash(key.hashCode());
        for (AirEntry<K, V> e = table[indexFor(hash, table.length)]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
                return e.value;
        }
        return null;
    }

    private V getForNullKey() {
        for (AirEntry<K, V> e = table[0]; e != null; e = e.next) {
            if (e.key == null)
                return e.value;
        }
        return null;
    }

    public boolean containsKey(Object key) {
        return getEntry(key) != null;
    }

    final AirEntry<K, V> getEntry(Object key) {
        int hash = (key == null) ? 0 : hash(key.hashCode());
        for (AirEntry<K, V> e = table[indexFor(hash, table.length)]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
                return e;
        }
        return null;
    }

    public V put(K key, V value) {
        if (key == null)
            return putForNullKey(value);
        int hash = hash(key.hashCode());
        int i = indexFor(hash, table.length);
        for (AirEntry<K, V> e = table[i]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }
        modCount++;
        addEntry(hash, key, value, i);
        return null;
    }

    private V putForNullKey(V value) {
        for (AirEntry<K, V> e = table[0]; e != null; e = e.next) {
            if (e.key == null) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }
        modCount++;
        addEntry(0, null, value, 0);
        return null;
    }

    private void putForCreate(K key, V value) {
        int hash = (key == null) ? 0 : hash(key.hashCode());
        int i = indexFor(hash, table.length);
        /*
         * Look for preexisting entry for key. This will never happen for clone or
         * deserialize. It will only happen for construction if the input Map is a
         * sorted map whose ordering is inconsistent w/ equals.
         */
        for (AirEntry<K, V> e = table[i]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) {
                e.value = value;
                return;
            }
        }
        createEntry(hash, key, value, i);
    }

    private void putAllForCreate(Map<? extends K, ? extends V> m) {
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
            putForCreate(e.getKey(), e.getValue());
    }

    void resize(int newCapacity) {
        AirEntry[] oldTable = table;
        int oldCapacity = oldTable.length;
        if (oldCapacity == MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return;
        }
        AirEntry[] newTable = new AirEntry[newCapacity];
        transfer(newTable);
        table = newTable;
        threshold = (int) (newCapacity * loadFactor);
    }

    void transfer(AirEntry[] newTable) {
        AirEntry[] src = table;
        int newCapacity = newTable.length;
        for (int j = 0; j < src.length; j++) {
            AirEntry<K, V> e = src[j];
            if (e != null) {
                src[j] = null;
                do {
                    AirEntry<K, V> next = e.next;
                    int i = indexFor(e.hash, newCapacity);
                    e.next = newTable[i];
                    newTable[i] = e;
                    e = next;
                } while (e != null);
            }
        }
    }

    public void putAll(Map<? extends K, ? extends V> m) {
        int numKeysToBeAdded = m.size();
        if (numKeysToBeAdded == 0)
            return;
        if (numKeysToBeAdded > threshold) {
            int targetCapacity = (int) (numKeysToBeAdded / loadFactor + 1);
            if (targetCapacity > MAXIMUM_CAPACITY)
                targetCapacity = MAXIMUM_CAPACITY;
            int newCapacity = table.length;
            while (newCapacity < targetCapacity)
                newCapacity <<= 1;
            if (newCapacity > table.length)
                resize(newCapacity);
        }
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
            put(e.getKey(), e.getValue());
    }

    public V remove(Object key) {
        AirEntry<K, V> e = removeEntryForKey(key);
        return (e == null ? null : e.value);
    }

    final AirEntry<K, V> removeEntryForKey(Object key) {
        int hash = (key == null) ? 0 : hash(key.hashCode());
        int i = indexFor(hash, table.length);
        AirEntry<K, V> prev = table[i];
        AirEntry<K, V> e = prev;
        while (e != null) {
            AirEntry<K, V> next = e.next;
            Object k;
            if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }
        return e;
    }

    final AirEntry<K, V> removeMapping(Object o) {
        if (!(o instanceof Map.Entry))
            return null;
        Map.Entry<K, V> entry = (Map.Entry<K, V>) o;
        Object key = entry.getKey();
        int hash = (key == null) ? 0 : hash(key.hashCode());
        int i = indexFor(hash, table.length);
        AirEntry<K, V> prev = table[i];
        AirEntry<K, V> e = prev;
        while (e != null) {
            AirEntry<K, V> next = e.next;
            if (e.hash == hash && e.equals(entry)) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }
        return e;
    }

    public void clear() {
        modCount++;
        AirEntry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            tab[i] = null;
        size = 0;
    }

    public boolean containsValue(Object value) {
        if (value == null)
            return containsNullValue();
        AirEntry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            for (AirEntry e = tab[i]; e != null; e = e.next)
                if (value.equals(e.value))
                    return true;
        return false;
    }

    private boolean containsNullValue() {
        AirEntry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            for (AirEntry e = tab[i]; e != null; e = e.next)
                if (e.value == null)
                    return true;
        return false;
    }

    public Object clone() {
        AirHashMap<K, V> result = null;
        try {
            result = (AirHashMap<K, V>) super.clone();
        } catch (CloneNotSupportedException e) {
            // assert false;
        }
        result.table = new AirEntry[table.length];
        result.entrySet = null;
        result.modCount = 0;
        result.size = 0;
        result.init();
        result.putAllForCreate(this);
        return result;
    }

    void addEntry(int hash, K key, V value, int bucketIndex) {
        AirEntry<K, V> e = table[bucketIndex];
        table[bucketIndex] = new AirEntry<K, V>(hash, key, value, e);
        if (size++ >= threshold)
            resize(2 * table.length);
    }

    void createEntry(int hash, K key, V value, int bucketIndex) {
        AirEntry<K, V> e = table[bucketIndex];
        table[bucketIndex] = new AirEntry<K, V>(hash, key, value, e);
        size++;
    }

    private final class ValueIterator extends HashIterator<V> {
        public V next() {
            return nextEntry().value;
        }
    }

    private final class KeyIterator extends HashIterator<K> {
        public K next() {
            return nextEntry().getKey();
        }
    }

    private final class EntryIterator extends HashIterator<Map.Entry<K, V>> {
        public Map.Entry<K, V> next() {
            return nextEntry();
        }
    }

    // Subclass overrides these to alter behavior of views' iterator() method
    Iterator<K> newKeyIterator() {
        return new KeyIterator();
    }

    Iterator<V> newValueIterator() {
        return new ValueIterator();
    }

    Iterator<Map.Entry<K, V>> newEntryIterator() {
        return new EntryIterator();
    }

    private transient Set<Map.Entry<K, V>> entrySet = null;

    // public Set<K> keySet() {
    //     Set<K> ks = keySet;
    //     return (ks != null ? ks : (keySet = new KeySet()));
    // }

    private final class KeySet extends AbstractSet<K> {
        public Iterator<K> iterator() {
            return newKeyIterator();
        }

        public int size() {
            return size;
        }

        public boolean contains(Object o) {
            return containsKey(o);
        }

        public boolean remove(Object o) {
            return AirHashMap.this.removeEntryForKey(o) != null;
        }

        public void clear() {
            AirHashMap.this.clear();
        }
    }

    // public Collection<V> values() {
    //     Collection<V> vs = values;
    //     return (vs != null ? vs : (values = new Values()));
    // }

    private final class Values extends AbstractCollection<V> {
        public Iterator<V> iterator() {
            return newValueIterator();
        }

        public int size() {
            return size;
        }

        public boolean contains(Object o) {
            return containsValue(o);
        }

        public void clear() {
            AirHashMap.this.clear();
        }
    }

    public Set<Map.Entry<K, V>> entrySet() {
        return entrySet0();
    }

    private Set<Map.Entry<K, V>> entrySet0() {
        Set<Map.Entry<K, V>> es = entrySet;
        return es != null ? es : (entrySet = new EntrySet());
    }

    private final class EntrySet extends AbstractSet<Map.Entry<K, V>> {
        public Iterator<Map.Entry<K, V>> iterator() {
            return newEntryIterator();
        }

        public boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<K, V> e = (Map.Entry<K, V>) o;
            AirEntry<K, V> candidate = getEntry(e.getKey());
            return candidate != null && candidate.equals(e);
        }

        public boolean remove(Object o) {
            return removeMapping(o) != null;
        }

        public int size() {
            return size;
        }

        public void clear() {
            AirHashMap.this.clear();
        }
    }

    private void writeObject(java.io.ObjectOutputStream s) throws IOException {
        Iterator<Map.Entry<K, V>> i = (size > 0) ? entrySet0().iterator() : null;

        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();

        // Write out number of buckets
        s.writeInt(table.length);

        // Write out size (number of Mappings)
        s.writeInt(size);

        // Write out keys and values (alternating)
        if (i != null) {
            while (i.hasNext()) {
                Map.Entry<K, V> e = i.next();
                s.writeObject(e.getKey());
                s.writeObject(e.getValue());
            }
        }
    }

    private void readObject(java.io.ObjectInputStream s) throws IOException, ClassNotFoundException {
        // Read in the threshold, loadfactor, and any hidden stuff
        s.defaultReadObject();

        // Read in number of buckets and allocate the bucket array
        int numBuckets = s.readInt();
        table = new AirEntry[numBuckets];

        init(); // Give subclass a chance to do its thing.

        // Read in size (number of Mappings)
        int size = s.readInt();

        // Read the keys and values, and put the mappings in the AirHashMap
        for (int i = 0; i < size; i++) {
            K key = (K) s.readObject();
            V value = (V) s.readObject();
            putForCreate(key, value);
        }
    }

    // These methods are used when serializing HashSets
    int capacity() {
        return table.length;
    }

    float loadFactor() {
        return loadFactor;
    }

    // ==== Entry ====
    static class AirEntry<K, V> implements Map.Entry<K, V> {
        final K key;
        V value;
        AirEntry<K, V> next;
        final int hash;

        /**
         * Creates new entry.
         */
        AirEntry(int h, K k, V v, AirEntry<K, V> n) {
            value = v;
            next = n;
            key = k;
            hash = h;
        }

        public final K getKey() {
            return key;
        }

        public final V getValue() {
            return value;
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry e = (Map.Entry) o;
            Object k1 = getKey();
            Object k2 = e.getKey();
            if (k1 == k2 || (k1 != null && k1.equals(k2))) {
                Object v1 = getValue();
                Object v2 = e.getValue();
                if (v1 == v2 || (v1 != null && v1.equals(v2)))
                    return true;
            }
            return false;
        }

        public final int hashCode() {
            return (key == null ? 0 : key.hashCode()) ^ (value == null ? 0 : value.hashCode());
        }

        public final String toString() {
            return getKey() + "=" + getValue();
        }

        /**
         * This method is invoked whenever the value in an entry is overwritten by an
         * invocation of put(k,v) for a key k that's already in the AirHashMap.
         */
        void recordAccess(AirHashMap<K, V> m) {
        }

        /**
         * This method is invoked whenever the entry is removed from the table.
         */
        void recordRemoval(AirHashMap<K, V> m) {
        }
    }

    // ==== HashIterator ====
    private abstract class HashIterator<E> implements Iterator<E> {
        AirEntry<K, V> next;    // next entry to return
        int expectedModCount;   // for fail-fast
        int index;              // current slot
        AirEntry<K, V> current; // current entry

        HashIterator() {
            expectedModCount = modCount;
            if (size > 0) { // advance to first entry
                AirEntry[] t = table;
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        final AirEntry<K, V> nextEntry() {
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            AirEntry<K, V> e = next;
            if (e == null)
                throw new NoSuchElementException();
            if ((next = e.next) == null) {
                AirEntry[] t = table;
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
            current = e;
            return e;
        }

        public void remove() {
            if (current == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Object k = current.key;
            current = null;
            AirHashMap.this.removeEntryForKey(k);
            expectedModCount = modCount;
        }
    }

    public AirEntry<K, V>[] getTable() {
        return table;
    }
}
```
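Two details in the code above are worth calling out: indexFor relies on the table length always being a power of two, so the bitmask `h & (length - 1)` is equivalent to `h % length` for non-negative h, and hash() folds the high bits of hashCode down so the mask actually sees them. A small standalone check of the mask identity:

```java
public class IndexForDemo {
    // Same as AirHashMap.indexFor: valid only when length is a power of two,
    // which the constructors guarantee by rounding the capacity up.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16; // a power of two
        for (int h = 0; h < 1000; h++) {
            if (indexFor(h, length) != h % length) {
                throw new AssertionError("mask and modulo disagree at h=" + h);
            }
        }
        System.out.println("h & (length - 1) == h % length for all h tested");
    }
}
```

This is why the bit-spreading in hash() matters: without it, hashCodes that differ only in the high bits would all land in the same bucket.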
Tests:
```java
package creative.air.datastructure.map;

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

import junit.framework.Assert;

import org.junit.Test;

/**
 * Test HashMap
 * 1 map: key-value pairs; hash: hashCode
 * 2 not thread-safe; null keys and values allowed; how null keys are handled
 * 3 array + linked-list structure
 * 4 loadFactor, threshold, resizing, cache design
 * 5 iteration
 *
 * Test Cache
 * @author Eric Han feuyeux@gmail.com 06/09/2012
 * @since 0.0.1
 * @version 0.0.1
 */
public class HashMapTest {
    static final Logger logger = Logger.getLogger(HashMapTest.class.getName());

    @Test
    public void test2() throws Exception {
        HashMapCache<String, Integer> cacheMap = new HashMapCache<String, Integer>();
        cacheMap.clear();
        cacheMap.put("" + 1, null);
        cacheMap.put(null, 1);
        cacheMap.put(null, null);
        Assert.assertNull(cacheMap.get(null));
    }

    @Test
    public void test5() {
        int n = 0;
        final int maxium = 5000;
        HashMapCache<String, Integer> cacheMap = new HashMapCache<String, Integer>(maxium);
        HashMapCache<String, Integer> retrievingMap = new HashMapCache<String, Integer>(maxium, maxium / 10, 0.5f);
        boolean inputRight = true;
        while (n < maxium) {
            String s = "" + n;
            try {
                cacheMap.put(s, n);
                retrievingMap.put(s, n++);
            } catch (Exception e) {
                e.printStackTrace();
                inputRight = false;
                break;
            }
        }
        Assert.assertTrue(inputRight);
        Object[] r1 = iterate(cacheMap, Level.INFO);
        Object[] r2 = iterate(retrievingMap, Level.INFO);
        logger.log(Level.INFO, "default map iterating elapse:{0}(start={1},end={2})", r1);
        logger.log(Level.INFO, "customize map iterating elapse:{0}(start={1},end={2})", r2);
        Assert.assertTrue((Long) r1[0] >= (Long) r2[0]);
    }

    private Object[] iterate(HashMapCache<String, Integer> map, Level level) {
        Iterator<Map.Entry<String, Integer>> iter = map.entrySet().iterator();
        long startTime = System.currentTimeMillis();
        while (iter.hasNext()) {
            Map.Entry<String, Integer> entry = iter.next();
            String key = entry.getKey();
            Integer val = entry.getValue();
            logger.log(level, "{0}:{1}", new Object[] { key, val });
        }
        long endTime = System.currentTimeMillis();
        return new Object[] { endTime - startTime, endTime, startTime };
    }

    @Test
    public void testPutAll() {
        HashMap<String, String> map = new HashMap<String, String>();
        HashMap<String, HashMap<String, String>> map2 = new HashMap<String, HashMap<String, String>>();
        map.put("Gateway", "Thomson");
        map2.put("patner", map);
        map.put("Gateway", "Technicolor");
        Iterator<Map.Entry<String, HashMap<String, String>>> iter = map2.entrySet().iterator();
        while (iter.hasNext()) {
            Map.Entry<String, HashMap<String, String>> entry = iter.next();
            String key = entry.getKey();
            HashMap<String, String> val = entry.getValue();
            Iterator<Map.Entry<String, String>> iter2 = val.entrySet().iterator();
            while (iter2.hasNext()) { // was iter.hasNext(): the inner loop never ran
                Map.Entry<String, String> entry2 = iter2.next();
                String key2 = entry2.getKey();
                String val2 = entry2.getValue();
                Assert.assertEquals("Technicolor", val2);
                logger.log(Level.INFO, "{0}:{1}", new Object[] { key2, val2 });
            }
            logger.log(Level.INFO, "{0}:{1}", new Object[] { key, val });
        }
    }
}
```
Comments
Floor 16
marshan
2012-09-26
Kilin wrote:
"Locking this way cannot guarantee thread safety."
Please give a counterexample, thanks.
Floor 15
marshan
2012-09-26
montya wrote (quoting himself):
"There shouldn't be any need to lock in get. Try CopyOnWriteArrayList; I'll write one myself when I have time."
"Correction: it should be ConcurrentHashMap."
After thinking it over, get needs the lock too: if another thread does a remove while get is running, there will be a problem.
Read-read is not mutually exclusive; read-write is.
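The point made in this exchange (read-read does not block, read-write does) can be checked directly against ReentrantReadWriteLock. This is a standalone illustration, not code from the post:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {
    public static void main(String[] args) throws InterruptedException {
        final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        final CountDownLatch readerHoldsLock = new CountDownLatch(1);
        final CountDownLatch done = new CountDownLatch(1);

        // a background reader holds the read lock for the whole test
        new Thread(new Runnable() {
            public void run() {
                lock.readLock().lock();
                try {
                    readerHoldsLock.countDown();
                    try { done.await(); } catch (InterruptedException ignored) { }
                } finally {
                    lock.readLock().unlock();
                }
            }
        }).start();

        readerHoldsLock.await();
        // read-read: a second reader gets in immediately
        System.out.println("read lock shared: " + lock.readLock().tryLock());
        lock.readLock().unlock();
        // read-write: a writer is refused while any reader holds the lock
        System.out.println("write lock blocked: " + !lock.writeLock().tryLock());
        done.countDown();
    }
}
```

Both lines print true: the second read lock is granted while a reader is active, and the write lock is not.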
Floor 14
marshan
2012-09-26
m_lixn wrote:
"This cache will break once you run in a cluster; when refreshing the cache, you should also synchronize the data held in the other containers."
The first phase does not consider clustering. Thanks.
Floor 13
marshan
2012-09-25
war0071 wrote:
"Took the chance to study the ReentrantReadWriteLock class, and it's nicely written! But this method:
public Iterator<Map.Entry<H, L>> iterate() {
    return map.entrySet().iterator();
}
will cause problems. You are using a HashMap implementation, whose iterator is fail-fast, so the iterator this method returns is not thread-safe: when another thread modifies the map, the iterator becomes invalid, and anyone still working with it will hit an error."
I remember you raised this before. I'll find time to keep working on this utility class; feel free to share your own results too. You know what I'm after, and I'm sure you can implement it better.
Floor 12
war0071
2012-09-21
Took the chance to study the ReentrantReadWriteLock class, and it's nicely written! But this method:
public Iterator<Map.Entry<H, L>> iterate() {
    return map.entrySet().iterator();
}
will cause problems. You are using a HashMap implementation, whose iterator is fail-fast, so the iterator this method returns is not thread-safe: when another thread modifies the map, the iterator becomes invalid, and anyone still working with it will hit an error.
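war0071's point is easy to reproduce even in a single thread: java.util.HashMap's iterator is fail-fast, so any structural modification after the iterator is created invalidates it. A standalone demonstration, not code from the post:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<String, Integer>();
        map.put("a", 1);
        map.put("b", 2);

        Iterator<Map.Entry<String, Integer>> it = map.entrySet().iterator();
        map.put("c", 3); // structural modification after the iterator was handed out

        try {
            it.next(); // the fail-fast check (modCount != expectedModCount) fires here
            System.out.println("no exception");
        } catch (ConcurrentModificationException e) {
            System.out.println("ConcurrentModificationException, as the comment predicts");
        }
    }
}
```

The same check exists in AirHashMap's HashIterator.nextEntry(), so the iterator handed out by iterate() fails in exactly this way once another thread calls put, remove, or clear.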
Floor 11
marshan
2012-09-07
montya wrote (quoting himself):
"There shouldn't be any need to lock in get. Try CopyOnWriteArrayList; I'll write one myself when I have time."
"Correction: it should be ConcurrentHashMap."
The point is to practice implementing a hash map. JDK 1.6 already has plenty of ready-made classes, and I have studied them.
Floor 10
marshan
2012-09-07
thihy wrote:
"Just use Google's Guava."
I wrote this to deepen my understanding of the JDK. I'll take up your suggestion when I have time.
Floor 9
marshan
2012-09-07
rockyzheng wrote:
"Can you tell me how this example differs from what's already in the JDK API?"
You'll see once it's finished; work in progress.
Floor 8
marshan
2012-09-07
Kilin wrote:
"Locking this way cannot guarantee thread safety."
Can you give a concrete solution?
Floor 7
montya
2012-09-07
montya wrote:
"There shouldn't be any need to lock in get. Try CopyOnWriteArrayList; I'll write one myself when I have time."
Correction: it should be ConcurrentHashMap.
Floor 6
Kilin
2012-09-07
Locking this way cannot guarantee thread safety.
Floor 5
m_lixn
2012-09-07
This cache will break once you run in a cluster; when refreshing the cache, you should also synchronize the data held in the other containers.
Floor 4
thihy
2012-09-07
Just use Google's Guava.
Floor 3
rockyzheng
2012-09-07
Can you tell me how this example differs from what's already in the JDK API?
Floor 2
montya
2012-09-07
There shouldn't be any need to lock in get.
Try CopyOnWriteArrayList; I'll write one myself when I have time.
Floor 1
xihuyu2000
2012-09-07
Nice post, bookmarked.