Recently I needed to add a "top key" statistic to cat. To keep memory from blowing up, I want an LRU cache, but it also has to be thread safe:
Source-code analysis of Google's ConcurrentLinkedHashMap ("Ken - 专注后端技术", a back-end tech blog)
http://code.google.com/p/concurrentlinkedhashmap/
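
For a sense of how that library is used, here is a minimal sketch. It assumes the Builder API and the `com.googlecode.concurrentlinkedhashmap` package of the project's 1.x releases; the capacity of 1000 and the key/value types are placeholders of mine:

```java
import java.util.concurrent.ConcurrentMap;

// assumes the 1.x releases of the concurrentlinkedhashmap project linked above
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;

public class ClhmSketch {
    public static void main(String[] args) {
        // A thread-safe map bounded at ~1000 entries (placeholder size);
        // once full, it evicts entries in roughly least-recently-used order.
        ConcurrentMap<String, Long> counts =
                new ConcurrentLinkedHashMap.Builder<String, Long>()
                        .maximumWeightedCapacity(1000)
                        .build();

        counts.put("some-key", 1L);
        Long n = counts.get("some-key");  // a get() also counts as a recent access
    }
}
```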
Solr's implementation:
```java
package org.apache.solr.common.util;

/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import org.apache.lucene.util.PriorityQueue;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.lang.ref.WeakReference;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

/**
 * A LRU cache implementation based upon ConcurrentHashMap and other techniques to reduce
 * contention and synchronization overhead to utilize multiple CPU cores more effectively.
 * <p/>
 * Note that the implementation does not follow a true LRU (least-recently-used) eviction
 * strategy. Instead it strives to remove least recently used items but when the initial
 * cleanup does not remove enough items to reach the 'acceptableWaterMark' limit, it can
 * remove more items forcefully regardless of access order.
 *
 * @version $Id: ConcurrentLRUCache.java 807872 2009-08-26 04:18:22Z hossman $
 * @since solr 1.4
 */
public class ConcurrentLRUCache<K,V> {
  private static Logger log = LoggerFactory.getLogger(ConcurrentLRUCache.class);

  private final ConcurrentHashMap<Object, CacheEntry> map;
  private final int upperWaterMark, lowerWaterMark;
  private final ReentrantLock markAndSweepLock = new ReentrantLock(true);
  private boolean isCleaning = false;  // not volatile... piggybacked on other volatile vars
  private final boolean newThreadForCleanup;
  private volatile boolean islive = true;
  private final Stats stats = new Stats();
  private final int acceptableWaterMark;
  private long oldestEntry = 0;  // not volatile, only accessed in the cleaning method
  private final EvictionListener<K,V> evictionListener;
  private CleanupThread cleanupThread;

  public ConcurrentLRUCache(int upperWaterMark, final int lowerWaterMark, int acceptableWatermark,
                            int initialSize, boolean runCleanupThread, boolean runNewThreadForCleanup,
                            EvictionListener<K,V> evictionListener) {
    if (upperWaterMark < 1) throw new IllegalArgumentException("upperWaterMark must be > 0");
    if (lowerWaterMark >= upperWaterMark)
      throw new IllegalArgumentException("lowerWaterMark must be < upperWaterMark");
    map = new ConcurrentHashMap<Object, CacheEntry>(initialSize);
    newThreadForCleanup = runNewThreadForCleanup;
    this.upperWaterMark = upperWaterMark;
    this.lowerWaterMark = lowerWaterMark;
    this.acceptableWaterMark = acceptableWatermark;
    this.evictionListener = evictionListener;
    if (runCleanupThread) {
      cleanupThread = new CleanupThread(this);
      cleanupThread.start();
    }
  }

  public ConcurrentLRUCache(int size, int lowerWatermark) {
    this(size, lowerWatermark, (int) Math.floor((lowerWatermark + size) / 2),
        (int) Math.ceil(0.75 * size), false, false, null);
  }

  public void setAlive(boolean live) {
    islive = live;
  }

  public V get(K key) {
    CacheEntry<K,V> e = map.get(key);
    if (e == null) {
      if (islive) stats.missCounter.incrementAndGet();
      return null;
    }
    if (islive) e.lastAccessed = stats.accessCounter.incrementAndGet();
    return e.value;
  }

  public V remove(K key) {
    CacheEntry<K,V> cacheEntry = map.remove(key);
    if (cacheEntry != null) {
      stats.size.decrementAndGet();
      return cacheEntry.value;
    }
    return null;
  }

  public Object put(K key, V val) {
    if (val == null) return null;
    CacheEntry e = new CacheEntry(key, val, stats.accessCounter.incrementAndGet());
    CacheEntry oldCacheEntry = map.put(key, e);
    if (oldCacheEntry == null) {
      stats.size.incrementAndGet();
    }
    if (islive) {
      stats.putCounter.incrementAndGet();
    } else {
      stats.nonLivePutCounter.incrementAndGet();
    }

    // Check if we need to clear out old entries from the cache.
    // isCleaning variable is checked instead of markAndSweepLock.isLocked()
    // for performance because every put invokation will check until
    // the size is back to an acceptable level.
    //
    // There is a race between the check and the call to markAndSweep, but
    // it's unimportant because markAndSweep actually aquires the lock or returns if it can't.
    //
    // Thread safety note: isCleaning read is piggybacked (comes after) other volatile reads
    // in this method.
    if (stats.size.get() > upperWaterMark && !isCleaning) {
      if (newThreadForCleanup) {
        new Thread() {
          public void run() {
            markAndSweep();
          }
        }.start();
      } else if (cleanupThread != null) {
        cleanupThread.wakeThread();
      } else {
        markAndSweep();
      }
    }
    return oldCacheEntry == null ? null : oldCacheEntry.value;
  }

  /**
   * Removes items from the cache to bring the size down
   * to an acceptable value ('acceptableWaterMark').
   * <p/>
   * It is done in two stages. In the first stage, least recently used items are evicted.
   * If, after the first stage, the cache size is still greater than 'acceptableSize'
   * config parameter, the second stage takes over.
   * <p/>
   * The second stage is more intensive and tries to bring down the cache size
   * to the 'lowerWaterMark' config parameter.
   */
  private void markAndSweep() {
    // if we want to keep at least 1000 entries, then timestamps of
    // current through current-1000 are guaranteed not to be the oldest (but that does
    // not mean there are 1000 entries in that group... it's acutally anywhere between
    // 1 and 1000).
    // Also, if we want to remove 500 entries, then
    // oldestEntry through oldestEntry+500 are guaranteed to be
    // removed (however many there are there).

    if (!markAndSweepLock.tryLock()) return;
    try {
      long oldestEntry = this.oldestEntry;
      isCleaning = true;
      this.oldestEntry = oldestEntry;     // volatile write to make isCleaning visible

      long timeCurrent = stats.accessCounter.get();
      int sz = stats.size.get();

      int numRemoved = 0;
      int numKept = 0;
      long newestEntry = timeCurrent;
      long newNewestEntry = -1;
      long newOldestEntry = Integer.MAX_VALUE;

      int wantToKeep = lowerWaterMark;
      int wantToRemove = sz - lowerWaterMark;

      CacheEntry<K,V>[] eset = new CacheEntry[sz];
      int eSize = 0;

      // System.out.println("newestEntry="+newestEntry + " oldestEntry="+oldestEntry);
      // System.out.println("items removed:" + numRemoved + " numKept=" + numKept + " esetSz="+ eSize + " sz-numRemoved=" + (sz-numRemoved));

      for (CacheEntry<K,V> ce : map.values()) {
        // set lastAccessedCopy to avoid more volatile reads
        ce.lastAccessedCopy = ce.lastAccessed;
        long thisEntry = ce.lastAccessedCopy;

        // since the wantToKeep group is likely to be bigger than wantToRemove, check it first
        if (thisEntry > newestEntry - wantToKeep) {
          // this entry is guaranteed not to be in the bottom
          // group, so do nothing.
          numKept++;
          newOldestEntry = Math.min(thisEntry, newOldestEntry);
        } else if (thisEntry < oldestEntry + wantToRemove) { // entry in bottom group?
          // this entry is guaranteed to be in the bottom group
          // so immediately remove it from the map.
          evictEntry(ce.key);
          numRemoved++;
        } else {
          // This entry *could* be in the bottom group.
          // Collect these entries to avoid another full pass... this is wasted
          // effort if enough entries are normally removed in this first pass.
          // An alternate impl could make a full second pass.
          if (eSize < eset.length-1) {
            eset[eSize++] = ce;
            newNewestEntry = Math.max(thisEntry, newNewestEntry);
            newOldestEntry = Math.min(thisEntry, newOldestEntry);
          }
        }
      }

      // System.out.println("items removed:" + numRemoved + " numKept=" + numKept + " esetSz="+ eSize + " sz-numRemoved=" + (sz-numRemoved));

      // TODO: allow this to be customized in the constructor?
      int numPasses = 1; // maximum number of linear passes over the data

      // if we didn't remove enough entries, then make more passes
      // over the values we collected, with updated min and max values.
      while (sz - numRemoved > acceptableWaterMark && --numPasses >= 0) {

        oldestEntry = newOldestEntry == Integer.MAX_VALUE ? oldestEntry : newOldestEntry;
        newOldestEntry = Integer.MAX_VALUE;
        newestEntry = newNewestEntry;
        newNewestEntry = -1;
        wantToKeep = lowerWaterMark - numKept;
        wantToRemove = sz - lowerWaterMark - numRemoved;

        // iterate backward to make it easy to remove items.
        for (int i = eSize-1; i >= 0; i--) {
          CacheEntry<K,V> ce = eset[i];
          long thisEntry = ce.lastAccessedCopy;

          if (thisEntry > newestEntry - wantToKeep) {
            // this entry is guaranteed not to be in the bottom
            // group, so do nothing but remove it from the eset.
            numKept++;
            // remove the entry by moving the last element to it's position
            eset[i] = eset[eSize-1];
            eSize--;

            newOldestEntry = Math.min(thisEntry, newOldestEntry);
          } else if (thisEntry < oldestEntry + wantToRemove) { // entry in bottom group?
            // this entry is guaranteed to be in the bottom group
            // so immediately remove it from the map.
            evictEntry(ce.key);
            numRemoved++;

            // remove the entry by moving the last element to it's position
            eset[i] = eset[eSize-1];
            eSize--;
          } else {
            // This entry *could* be in the bottom group, so keep it in the eset,
            // and update the stats.
            newNewestEntry = Math.max(thisEntry, newNewestEntry);
            newOldestEntry = Math.min(thisEntry, newOldestEntry);
          }
        }
        // System.out.println("items removed:" + numRemoved + " numKept=" + numKept + " esetSz="+ eSize + " sz-numRemoved=" + (sz-numRemoved));
      }

      // if we still didn't remove enough entries, then make another pass while
      // inserting into a priority queue
      if (sz - numRemoved > acceptableWaterMark) {

        oldestEntry = newOldestEntry == Integer.MAX_VALUE ? oldestEntry : newOldestEntry;
        newOldestEntry = Integer.MAX_VALUE;
        newestEntry = newNewestEntry;
        newNewestEntry = -1;
        wantToKeep = lowerWaterMark - numKept;
        wantToRemove = sz - lowerWaterMark - numRemoved;

        PQueue queue = new PQueue(wantToRemove);

        for (int i = eSize-1; i >= 0; i--) {
          CacheEntry<K,V> ce = eset[i];
          long thisEntry = ce.lastAccessedCopy;

          if (thisEntry > newestEntry - wantToKeep) {
            // this entry is guaranteed not to be in the bottom
            // group, so do nothing but remove it from the eset.
            numKept++;
            // removal not necessary on last pass.
            // eset[i] = eset[eSize-1];
            // eSize--;

            newOldestEntry = Math.min(thisEntry, newOldestEntry);
          } else if (thisEntry < oldestEntry + wantToRemove) { // entry in bottom group?
            // this entry is guaranteed to be in the bottom group
            // so immediately remove it.
            evictEntry(ce.key);
            numRemoved++;

            // removal not necessary on last pass.
            // eset[i] = eset[eSize-1];
            // eSize--;
          } else {
            // This entry *could* be in the bottom group.
            // add it to the priority queue

            // everything in the priority queue will be removed, so keep track of
            // the lowest value that ever comes back out of the queue.

            // first reduce the size of the priority queue to account for
            // the number of items we have already removed while executing
            // this loop so far.
            queue.myMaxSize = sz - lowerWaterMark - numRemoved;
            while (queue.size() > queue.myMaxSize && queue.size() > 0) {
              CacheEntry otherEntry = (CacheEntry) queue.pop();
              newOldestEntry = Math.min(otherEntry.lastAccessedCopy, newOldestEntry);
            }
            if (queue.myMaxSize <= 0) break;

            Object o = queue.myInsertWithOverflow(ce);
            if (o != null) {
              newOldestEntry = Math.min(((CacheEntry)o).lastAccessedCopy, newOldestEntry);
            }
          }
        }

        // Now delete everything in the priority queue.
        // avoid using pop() since order doesn't matter anymore
        for (Object o : queue.getValues()) {
          if (o == null) continue;
          CacheEntry<K,V> ce = (CacheEntry)o;
          evictEntry(ce.key);
          numRemoved++;
        }

        // System.out.println("items removed:" + numRemoved + " numKept=" + numKept + " initialQueueSize="+ wantToRemove + " finalQueueSize=" + queue.size() + " sz-numRemoved=" + (sz-numRemoved));
      }

      oldestEntry = newOldestEntry == Integer.MAX_VALUE ? oldestEntry : newOldestEntry;
      this.oldestEntry = oldestEntry;
    } finally {
      isCleaning = false;  // set before markAndSweep.unlock() for visibility
      markAndSweepLock.unlock();
    }
  }

  private static class PQueue extends PriorityQueue {
    int myMaxSize;

    PQueue(int maxSz) {
      super.initialize(maxSz);
      myMaxSize = maxSz;
    }

    Object[] getValues() { return heap; }

    protected boolean lessThan(Object a, Object b) {
      // reverse the parameter order so that the queue keeps the oldest items
      return ((CacheEntry)b).lastAccessedCopy < ((CacheEntry)a).lastAccessedCopy;
    }

    // necessary because maxSize is private in base class
    public Object myInsertWithOverflow(Object element) {
      if (size() < myMaxSize) {
        put(element);
        return null;
      } else if (size() > 0 && !lessThan(element, heap[1])) {
        Object ret = heap[1];
        heap[1] = element;
        adjustTop();
        return ret;
      } else {
        return element;
      }
    }
  }

  private void evictEntry(K key) {
    CacheEntry<K,V> o = map.remove(key);
    if (o == null) return;
    stats.size.decrementAndGet();
    stats.evictionCounter++;
    if (evictionListener != null) evictionListener.evictedEntry(o.key, o.value);
  }

  /**
   * Returns 'n' number of oldest accessed entries present in this cache.
   *
   * This uses a TreeSet to collect the 'n' oldest items ordered by ascending last access time
   * and returns a LinkedHashMap containing 'n' or less than 'n' entries.
   * @param n the number of oldest items needed
   * @return a LinkedHashMap containing 'n' or less than 'n' entries
   */
  public Map<K, V> getOldestAccessedItems(int n) {
    markAndSweepLock.lock();
    Map<K, V> result = new LinkedHashMap<K, V>();
    TreeSet<CacheEntry> tree = new TreeSet<CacheEntry>();
    try {
      for (Map.Entry<Object, CacheEntry> entry : map.entrySet()) {
        CacheEntry ce = entry.getValue();
        ce.lastAccessedCopy = ce.lastAccessed;
        if (tree.size() < n) {
          tree.add(ce);
        } else {
          if (ce.lastAccessedCopy < tree.first().lastAccessedCopy) {
            tree.remove(tree.first());
            tree.add(ce);
          }
        }
      }
    } finally {
      markAndSweepLock.unlock();
    }
    for (CacheEntry<K, V> e : tree) {
      result.put(e.key, e.value);
    }
    return result;
  }

  public Map<K,V> getLatestAccessedItems(int n) {
    // we need to grab the lock since we are changing lastAccessedCopy
    markAndSweepLock.lock();
    Map<K,V> result = new LinkedHashMap<K,V>();
    TreeSet<CacheEntry> tree = new TreeSet<CacheEntry>();
    try {
      for (Map.Entry<Object, CacheEntry> entry : map.entrySet()) {
        CacheEntry ce = entry.getValue();
        ce.lastAccessedCopy = ce.lastAccessed;
        if (tree.size() < n) {
          tree.add(ce);
        } else {
          if (ce.lastAccessedCopy > tree.last().lastAccessedCopy) {
            tree.remove(tree.last());
            tree.add(ce);
          }
        }
      }
    } finally {
      markAndSweepLock.unlock();
    }
    for (CacheEntry<K,V> e : tree) {
      result.put(e.key, e.value);
    }
    return result;
  }

  public int size() {
    return stats.size.get();
  }

  public void clear() {
    map.clear();
  }

  public Map<Object, CacheEntry> getMap() {
    return map;
  }

  private static class CacheEntry<K,V> implements Comparable<CacheEntry> {
    K key;
    V value;
    volatile long lastAccessed = 0;
    long lastAccessedCopy = 0;

    public CacheEntry(K key, V value, long lastAccessed) {
      this.key = key;
      this.value = value;
      this.lastAccessed = lastAccessed;
    }

    public void setLastAccessed(long lastAccessed) {
      this.lastAccessed = lastAccessed;
    }

    public int compareTo(CacheEntry that) {
      if (this.lastAccessedCopy == that.lastAccessedCopy) return 0;
      return this.lastAccessedCopy < that.lastAccessedCopy ? 1 : -1;
    }

    public int hashCode() {
      return value.hashCode();
    }

    public boolean equals(Object obj) {
      return value.equals(obj);
    }

    public String toString() {
      return "key: " + key + " value: " + value + " lastAccessed:" + lastAccessed;
    }
  }

  private boolean isDestroyed = false;

  public void destroy() {
    try {
      if (cleanupThread != null) {
        cleanupThread.stopThread();
      }
    } finally {
      isDestroyed = true;
    }
  }

  public Stats getStats() {
    return stats;
  }

  public static class Stats {
    private final AtomicLong accessCounter = new AtomicLong(0),
        putCounter = new AtomicLong(0),
        nonLivePutCounter = new AtomicLong(0),
        missCounter = new AtomicLong();
    private final AtomicInteger size = new AtomicInteger();
    private long evictionCounter = 0;

    public long getCumulativeLookups() {
      return (accessCounter.get() - putCounter.get() - nonLivePutCounter.get()) + missCounter.get();
    }

    public long getCumulativeHits() {
      return accessCounter.get() - putCounter.get() - nonLivePutCounter.get();
    }

    public long getCumulativePuts() {
      return putCounter.get();
    }

    public long getCumulativeEvictions() {
      return evictionCounter;
    }

    public int getCurrentSize() {
      return size.get();
    }

    public long getCumulativeNonLivePuts() {
      return nonLivePutCounter.get();
    }

    public long getCumulativeMisses() {
      return missCounter.get();
    }
  }

  public static interface EvictionListener<K,V> {
    public void evictedEntry(K key, V value);
  }

  private static class CleanupThread extends Thread {
    private WeakReference<ConcurrentLRUCache> cache;
    private boolean stop = false;

    public CleanupThread(ConcurrentLRUCache c) {
      cache = new WeakReference<ConcurrentLRUCache>(c);
    }

    public void run() {
      while (true) {
        synchronized (this) {
          if (stop) break;
          try {
            this.wait();
          } catch (InterruptedException e) {}
        }
        if (stop) break;
        ConcurrentLRUCache c = cache.get();
        if (c == null) break;
        c.markAndSweep();
      }
    }

    void wakeThread() {
      synchronized (this) {
        this.notify();
      }
    }

    void stopThread() {
      synchronized (this) {
        stop = true;
        this.notify();
      }
    }
  }

  protected void finalize() throws Throwable {
    try {
      if (!isDestroyed) {
        log.error("ConcurrentLRUCache was not destroyed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!");
        destroy();
      }
    } finally {
      super.finalize();
    }
  }
}
```
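
Tying this back to the original goal (a memory-bounded, thread-safe top-key counter), here is a minimal sketch of how the Solr class above might be wired up. The `TopKeyStats` wrapper, the `String` key type, and the 10000/9000 watermarks are my own illustrative assumptions, not part of the Solr code:

```java
import java.util.concurrent.atomic.AtomicLong;

public class TopKeyStats {
    // Start sweeping when the cache grows past 10000 keys and sweep back
    // down toward 9000 (placeholder watermarks). This two-arg constructor
    // starts no cleanup thread, so markAndSweep() runs inline on whichever
    // thread's put() crosses the upper watermark.
    private final ConcurrentLRUCache<String, AtomicLong> cache =
            new ConcurrentLRUCache<String, AtomicLong>(10000, 9000);

    public void record(String key) {
        AtomicLong counter = cache.get(key);
        if (counter == null) {
            // Racy check-then-act: two threads may both install a fresh
            // counter and a few increments can be lost, which is usually
            // acceptable for approximate "top key" statistics.
            counter = new AtomicLong();
            cache.put(key, counter);
        }
        counter.incrementAndGet();
    }

    public long countFor(String key) {
        AtomicLong counter = cache.get(key);
        return counter == null ? 0 : counter.get();
    }
}
```

If inline sweeping on a hot path is undesirable, the seven-argument constructor can instead run the dedicated `CleanupThread` (`runCleanupThread = true`), so `put()` only wakes the sweeper rather than doing the sweep itself.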
通常,它会包含一个`LRUCache`结构体,其中包含一个`Arc, Value>>>`来存储数据,以及一个`Arc<RwLock<Vec<Key>>>`来跟踪最近使用过的键的顺序。 5. **API设计**: 一个理想的并发LRU缓存库应该提供简单易用的API,...