1> MapMaker for creating ConcurrentMap instances.
2> CacheBuilder for creating LoadingCache and Cache instances.
3> CacheBuilderSpec for creating a CacheBuilder instance from a formatted string.
4> CacheLoader that is used by a LoadingCache instance to retrieve a single value for a given key.
5> CacheStats that provides statistics about the performance of the cache.
6> RemovalListener that receives notifications when an entry has been removed from the cache.
1> MapMaker
@Test
public void makeMapTest() {
    Map<Object, Object> map = new MapMaker()
            .concurrencyLevel(2)
            .weakValues()
            .weakKeys()
            .makeMap();
    map.put(new Object(), new Object());
}
Q: What's the meaning of "concurrencyLevel"?
Q: What's the benefit of using "weakValues" and "weakKeys"?
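One way to explore the second question is the hedged sketch below: with weakKeys(), an entry whose key is no longer strongly referenced becomes eligible for garbage collection, so the map can shrink on its own (weakKeys() also switches key comparison to identity). Since System.gc() is only a hint to the JVM, the printed size is indicative rather than guaranteed.

@Test
public void weakKeysSketchTest() throws InterruptedException {
    Map<Object, Object> map = new MapMaker()
            .concurrencyLevel(2)   // hint for the expected number of concurrent writer threads
            .weakKeys()            // keys are held via weak references and compared by identity (==)
            .makeMap();

    Object strongKey = new Object();
    map.put(strongKey, "kept");           // key still strongly referenced by 'strongKey'
    map.put(new Object(), "collectable"); // this key has no strong reference after this line

    System.gc();                          // only a hint; the "collectable" entry may or may not be gone yet
    Thread.sleep(100);

    System.out.println("size after GC hint: " + map.size()); // often 1, but not guaranteed
    System.out.println("kept entry: " + map.get(strongKey));
}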
2> Cache & LoadingCache
1. Cache
public interface Cache<K, V> {
    void put(K key, V value);

    @Nullable
    V getIfPresent(Object key);

    V get(K key, Callable<? extends V> valueLoader) throws ExecutionException;

    /**
     * Returns a map of the values associated with {@code keys} in this cache. The returned map
     * will only contain entries which are already present in the cache.
     */
    ImmutableMap<K, V> getAllPresent(Iterable<?> keys);

    void invalidate(Object key);

    void invalidateAll(Iterable<?> keys);

    void invalidateAll();

    ConcurrentMap<K, V> asMap();
}
1) get & getIfPresent
@Test(expected = InvalidCacheLoadException.class)
public void getTest() throws ExecutionException {
    Cache<String, String> cache = CacheBuilder.newBuilder().build();
    cache.put("KEY_1", "VALUE_1");

    String value = cache.getIfPresent("KEY_2");
    assertNull(value);

    value = cache.get("KEY_2", new Callable<String>() {
        public String call() throws Exception {
            return "VALUE_2";
        }
    });
    assertEquals("VALUE_2", value);

    value = cache.getIfPresent("KEY_2");
    assertEquals("VALUE_2", value);

    value = cache.get("KEY_2", new Callable<String>() {
        public String call() throws Exception {
            return null;
        }
    });
    assertEquals("VALUE_2", value);

    cache.invalidate("KEY_2");
    value = cache.get("KEY_2", new Callable<String>() {
        public String call() throws Exception {
            return null; // InvalidCacheLoadException would be thrown
        }
    });
}
The logic of cache.get(key, valueLoader) is:
1> Look up the value in the cache with the provided key.
2> If a value is found, it is returned and valueLoader is NEVER invoked.
3> If no value is found, valueLoader is invoked to compute the value.
1> If valueLoader returns null, CacheLoader$InvalidCacheLoadException is thrown.
2> If valueLoader returns a non-null value, that value is returned and the key/value pair is stored in the cache at the same time.
Thus the rule of thumb is: DO NOT RETURN NULL IN VALUELOADER.
If we want null to be returned when no corresponding value can be found, use getIfPresent(key) instead.
The Callable parameter seems to imply that valueLoader is executed asynchronously, but is it? And what do we do if we don't need or want to execute an asynchronous task?
@Test
public void getTest() throws ExecutionException, InterruptedException {
    Cache<String, String> cache = CacheBuilder.newBuilder().build();
    Callable<String> callable = new Callable<String>() {
        @Override
        public String call() throws Exception {
            System.out.println("Thread: " + Thread.currentThread());
            Thread.sleep(1000);
            return "VALUE_" + System.currentTimeMillis();
        }
    };

    System.out.println(System.currentTimeMillis());
    String value = cache.get("KEY_1", callable);
    System.out.println(System.currentTimeMillis());
    System.out.println(value);

    value = cache.getIfPresent("KEY_1");
    System.out.println(System.currentTimeMillis());
    System.out.println(value);
}
// output:
// 1409031531671
// Thread: Thread[main,5,main]
// 1409031532699
// VALUE_1409031532684
// 1409031532699
// VALUE_1409031532684

Q: It seems the callable is still executed in the main thread; how can we start valueLoader asynchronously?
@Test
public void syncGetTest() throws ExecutionException {
    Cache<String, String> cache = CacheBuilder.newBuilder().build();
    System.out.println(System.currentTimeMillis());
    String value = cache.get("KEY_1",
            Callables.returning("VALUE_" + System.currentTimeMillis()));
    // What if we used Callables.returning(timeConsumingService.get("KEY_1"))?
    // The main thread would still have to wait for the service to return.
    System.out.println(System.currentTimeMillis());
    System.out.println(value);
}
// output:
// 1409031825841
// 1409031825842
// 1409031825869
// VALUE_1409031825842
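As the two tests above suggest, get(key, valueLoader) itself is synchronous: the loader always runs on the calling thread. If we do not want the caller to block, a hedged sketch is to move the whole lookup onto a worker thread ourselves; the 1000ms sleep below merely stands in for a slow backing service.

public static void main(String[] args) throws Exception {
    final Cache<String, String> cache = CacheBuilder.newBuilder().build();
    ExecutorService executor = Executors.newSingleThreadExecutor();

    // Submit the whole (blocking) cache lookup to a worker thread.
    Future<String> future = executor.submit(new Callable<String>() {
        @Override
        public String call() throws Exception {
            return cache.get("KEY_1", new Callable<String>() {
                @Override
                public String call() throws Exception {
                    Thread.sleep(1000); // stands in for a slow service call
                    return "VALUE_" + System.currentTimeMillis();
                }
            });
        }
    });

    System.out.println("Main thread is not blocked: " + System.currentTimeMillis());
    System.out.println("Loaded value: " + future.get()); // blocks only here, when the value is needed
    executor.shutdown();
}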
2) invalidate & invalidateAll
@Test
public void invalidateTest() {
    Cache<String, String> cache = CacheBuilder.newBuilder().build();
    cache.put("KEY_1", "VALUE_1");
    cache.put("KEY_2", "VALUE_2");
    cache.put("KEY_3", "VALUE_3");
    cache.put("KEY_4", "VALUE_4");

    String value = cache.getIfPresent("KEY_1");
    assertEquals("VALUE_1", value);
    cache.invalidate("KEY_1");
    value = cache.getIfPresent("KEY_1");
    assertNull(value);

    cache.invalidateAll(Lists.newArrayList("KEY_2", "KEY_3"));
    value = cache.getIfPresent("KEY_2");
    assertNull(value);
    value = cache.getIfPresent("KEY_3");
    assertNull(value);
    value = cache.getIfPresent("KEY_4");
    assertEquals("VALUE_4", value);

    cache.invalidateAll();
    value = cache.getIfPresent("KEY_4");
    assertNull(value);

    cache.invalidate("KEY_N");
}
2. LoadingCache
public interface LoadingCache<K, V> extends Cache<K, V>, Function<K, V> {
    V get(K key) throws ExecutionException;

    V getUnchecked(K key);

    ImmutableMap<K, V> getAll(Iterable<? extends K> keys) throws ExecutionException;

    void refresh(K key);

    ConcurrentMap<K, V> asMap();
}
The LoadingCache interface extends the Cache interface with the self-loading functionality.
Consider the following code:
Book book = loadingCache.get(id);
If the book object is not available when the get call is executed, the LoadingCache will know how to retrieve the object, store it in the cache, and return it.
As implementations of LoadingCache are expected to be thread safe, a call to get() with the same key, made while the cache is loading, will block. Once the value has been loaded, the call returns the value that was loaded by the original call to get().
However, multiple calls to get with distinct keys would load concurrently.
static LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>() {
            private int i = 1;

            @Override
            public String load(String key) throws Exception {
                Thread.sleep(1000);
                return "DUMMY_VALUE" + (++i);
            }
        });

@Test
public void syncLoadingTest() throws ExecutionException {
    System.out.println(System.currentTimeMillis());
    String value = cache.get("DUMMY_KEY"); // Blocking 1000ms for loading
    System.out.println(System.currentTimeMillis());
    System.out.println("Finished syncLoadingTest, value: " + value);
}
// output: We can see the loading process cost 1000ms.
// 1409046809839
// 1409046810850
// Finished syncLoadingTest, value: DUMMY_VALUE2

@SuppressWarnings("unchecked")
@Test
public void asyncReadingTest() throws ExecutionException, InterruptedException {
    Callable<String> readThread1 = new Callable<String>() {
        @Override
        public String call() throws Exception {
            return cache.get("DUMMY_KEY_1");
        }
    };
    Callable<String> readThread2 = new Callable<String>() {
        @Override
        public String call() throws Exception {
            return cache.get("DUMMY_KEY_2");
        }
    };
    Callable<String> readThread3 = new Callable<String>() {
        @Override
        public String call() throws Exception {
            return cache.get("DUMMY_KEY_3");
        }
    };

    System.out.println("Before invokeAll: " + System.currentTimeMillis());
    Executors.newFixedThreadPool(3).invokeAll(
            Lists.newArrayList(readThread1, readThread2, readThread3));
    System.out.println("After invokeAll: " + System.currentTimeMillis());

    System.out.println("Before get: " + System.currentTimeMillis());
    String value1 = cache.get("DUMMY_KEY_1");
    String value2 = cache.get("DUMMY_KEY_2");
    String value3 = cache.get("DUMMY_KEY_3");
    System.out.println("After get: " + System.currentTimeMillis() + ".\nvalue1: " + value1
            + ", value2: " + value2 + ", value3:" + value3);
}
// output: We can see, for all 3 values, the loading process only cost 1000ms.
// Before invokeAll: 1409046901047
// After invokeAll: 1409046902116
// Before get: 1409046902116
// After get: 1409046902116.
// value1: DUMMY_VALUE2, value2: DUMMY_VALUE2, value3:DUMMY_VALUE3
If we have a collection of keys and would like to retrieve the values for each key, we will make the following call:
ImmutableMap<K, V> map = cache.getAll(Iterable<? extends K> keys);
The map returned from getAll could either be all cached values, all newly retrieved values, or a mix of already cached and newly retrieved values.
Q: Is the loading process for an uncached value synchronous or asynchronous?
A: Synchronous:
static LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>() {
            private int i = 1;

            @Override
            public String load(String key) throws Exception {
                Thread.sleep(1000);
                return "DUMMY_VALUE" + (++i);
            }
        });

@Test
public void getAllTest() throws ExecutionException {
    System.out.println("Before getAllTest: " + System.currentTimeMillis());
    cache.getAll(Lists.newArrayList("KEY_1", "KEY_2", "KEY_3"));
    System.out.println("After getAllTest: " + System.currentTimeMillis());
}
// output: We can see that the total time consumption is about 3000ms.
// Before getAllTest: 1409049717214
// After getAllTest: 1409049720343
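The three misses above are loaded one at a time through load(), hence roughly 3 x 1000ms. If the backing store supports bulk lookups, a hedged optimization is to override CacheLoader.loadAll, which getAll uses for the keys that are not yet cached; the single sleep below stands in for one bulk round trip.

static LoadingCache<String, String> bulkCache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                Thread.sleep(1000); // single-key load
                return "DUMMY_VALUE_" + key;
            }

            @Override
            public Map<String, String> loadAll(Iterable<? extends String> keys) throws Exception {
                Thread.sleep(1000); // one bulk round trip for all missing keys
                Map<String, String> result = new HashMap<String, String>();
                for (String key : keys) {
                    result.put(key, "DUMMY_VALUE_" + key);
                }
                return result;
            }
        });

@Test
public void getAllWithLoadAllTest() throws ExecutionException {
    System.out.println("Before getAll: " + System.currentTimeMillis());
    bulkCache.getAll(Lists.newArrayList("KEY_1", "KEY_2", "KEY_3"));
    // With loadAll overridden, loading the three missing keys costs about 1000ms
    // in total instead of about 3000ms.
    System.out.println("After getAll: " + System.currentTimeMillis());
}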
LoadingCache also provides a mechanism for refreshing values in the cache:
void refresh(K key);
By making a call to refresh, LoadingCache will retrieve a new value for the key. The current value will not be discarded until the new value has been returned; this means that calls to get() during the loading process will return the current value in the cache. If an exception is thrown during the refresh call, the original value is kept in the cache. Keep in mind that if the value is retrieved asynchronously, the method could return before the value is actually refreshed.
static LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>() {
            private int i = 1;

            @Override
            public String load(String key) throws Exception {
                Thread.sleep(1000);
                return "DUMMY_VALUE" + (++i);
            }
        });

/**
 * Test for refresh() in LoadingCache. <br/>
 * The calls to get() during the loading process will return the current value in the cache. <br/>
 */
public static void main(String[] args) {
    cache.put("DUMMY_KEY1", "DUMMY_VALUE1");

    Thread refreshThread = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                while (true) {
                    Thread.sleep(1000);
                    System.out.println("Start refresh KEY: DUMMY_KEY1");
                    cache.refresh("DUMMY_KEY1");
                    System.out.println("Finished refresh KEY: DUMMY_KEY1");
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });

    Thread getThread = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                while (true) {
                    Thread.sleep(500);
                    System.out.println("Start get KEY: DUMMY_KEY1");
                    String value = cache.get("DUMMY_KEY1");
                    System.out.println("Finished get KEY: DUMMY_KEY1, VALUE: " + value);
                }
            } catch (ExecutionException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });

    refreshThread.start();
    getThread.start();
}
// output: We can find that during the course of refresh, we will still get the legacy value.
// Start get KEY: DUMMY_KEY1
// Finished get KEY: DUMMY_KEY1, VALUE: DUMMY_VALUE1
// Start get KEY: DUMMY_KEY1
// Start refresh KEY: DUMMY_KEY1
// Finished get KEY: DUMMY_KEY1, VALUE: DUMMY_VALUE1
// Start get KEY: DUMMY_KEY1
// Finished get KEY: DUMMY_KEY1, VALUE: DUMMY_VALUE1
// Start get KEY: DUMMY_KEY1
// Finished get KEY: DUMMY_KEY1, VALUE: DUMMY_VALUE1
// Finished refresh KEY: DUMMY_KEY1
// Start get KEY: DUMMY_KEY1
// Finished get KEY: DUMMY_KEY1, VALUE: DUMMY_VALUE2
// Start get KEY: DUMMY_KEY1
// Finished get KEY: DUMMY_KEY1, VALUE: DUMMY_VALUE2
// Start refresh KEY: DUMMY_KEY1
// Start get KEY: DUMMY_KEY1
// Finished get KEY: DUMMY_KEY1, VALUE: DUMMY_VALUE2
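refresh() performs the reload synchronously on the thread that calls it, which is why refreshThread above pauses for about a second on every refresh. If refreshes should run in the background instead, a hedged sketch is to override CacheLoader.reload so that it returns a ListenableFuture driven by our own executor (ListenableFuture and ListenableFutureTask come from com.google.common.util.concurrent; the pool size below is purely illustrative).

static ExecutorService reloadExecutor = Executors.newFixedThreadPool(2);

static LoadingCache<String, String> asyncRefreshingCache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                Thread.sleep(1000); // stands in for a slow lookup
                return "DUMMY_VALUE_" + System.currentTimeMillis();
            }

            @Override
            public ListenableFuture<String> reload(final String key, String oldValue) {
                // Hand the reload off to the executor; get() keeps returning oldValue
                // until this future completes.
                ListenableFutureTask<String> task =
                        ListenableFutureTask.create(new Callable<String>() {
                            @Override
                            public String call() throws Exception {
                                return load(key);
                            }
                        });
                reloadExecutor.execute(task);
                return task;
            }
        });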
3> CacheBuilder
The CacheBuilder class provides a way to obtain Cache and LoadingCache instances via the Builder pattern. There are many options we can specify on the Cache instance we are creating; rather than listing all of them, let us walk through a couple of examples.
Eg1:
package edu.xmu.guava.cache;

import java.util.concurrent.TimeUnit;

import org.junit.Test;

import com.google.common.base.Ticker;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class CacheBuilderTest {
    @Test
    public void buildCacheTest() throws Exception {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(2, TimeUnit.SECONDS)
                .ticker(Ticker.systemTicker())
                .removalListener(new RemovalListener<String, String>() {
                    @Override
                    public void onRemoval(RemovalNotification<String, String> notification) {
                        System.out.println(String.format("[%s] is removed from cache", notification));
                    }
                }).build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) throws Exception {
                        return key + System.currentTimeMillis();
                    }
                });

        String value = cache.get("Hello");
        System.out.println(value);
        value = cache.get("Hello");
        System.out.println(value);

        Thread.sleep(2100);
        System.out.println(cache.size());

        Thread.sleep(1100);
        value = cache.get("Hello");
        System.out.println(value);
    }
}
// output: The new value is created only after about 3200ms instead of right at 2000ms, and when we
// read the size at 2100ms, it is still 1 instead of 0.
// Hello1409057402516
// Hello1409057402516
// 1
// [Hello=Hello1409057402516] is removed from cache
// Hello1409057405725

expireAfterWrite: When the duration is zero, this method hands off to maximumSize(0), ignoring any otherwise-specified maximum size or weight. This can be useful in testing, or to disable caching temporarily without a code change. Note that expireAfterWrite does not remove the entry automatically the moment the duration elapses; the entry is actually removed when the cache is accessed again and (currentTimestamp - lastWriteTimestamp > duration).
ticker: Specifies a nanosecond-precision time source for use in determining when entries should be expired. By default, System.nanoTime is used. The primary intent of this method is to facilitate testing of caches which have been configured with expireAfterWrite or expireAfterAccess.
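For example, a hedged test sketch using FakeTicker from the guava-testlib artifact (com.google.common.testing.FakeTicker) lets us "fast-forward" the clock instead of sleeping; the counter-based loader is only there to make the assertions deterministic.

@Test
public void fakeTickerTest() throws ExecutionException {
    final AtomicInteger loadCount = new AtomicInteger();
    FakeTicker ticker = new FakeTicker();
    LoadingCache<String, String> cache = CacheBuilder.newBuilder()
            .expireAfterWrite(2, TimeUnit.SECONDS)
            .ticker(ticker)
            .build(new CacheLoader<String, String>() {
                @Override
                public String load(String key) throws Exception {
                    return key + "_" + loadCount.incrementAndGet();
                }
            });

    assertEquals("Hello_1", cache.get("Hello"));
    ticker.advance(1, TimeUnit.SECONDS);         // still within the 2-second window, served from cache
    assertEquals("Hello_1", cache.get("Hello"));

    ticker.advance(2, TimeUnit.SECONDS);         // now past the expireAfterWrite window
    assertEquals("Hello_2", cache.get("Hello")); // entry reloaded, without any real sleeping
}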
Eg2:
@Test
public void maxSizeTest() throws ExecutionException {
    LoadingCache<String, String> cache = CacheBuilder.newBuilder()
            .maximumSize(3L)
            .removalListener(new RemovalListener<String, String>() {
                @Override
                public void onRemoval(RemovalNotification<String, String> notification) {
                    System.out.println(String.format("[%s] is removed from cache", notification));
                }
            }).build(new CacheLoader<String, String>() {
                @Override
                public String load(String key) throws Exception {
                    return key + "_" + System.currentTimeMillis();
                }
            });

    for (int i = 0; i < 12; i++) {
        System.out.println(cache.get(String.valueOf(i % 5)));
    }
}
// output:
// 0_1409058469045
// 1_1409058469046
// 2_1409058469046
// [0=0_1409058469045] is removed from cache
// 3_1409058469046
// [1=1_1409058469046] is removed from cache
// 4_1409058469055
// [2=2_1409058469046] is removed from cache
// 0_1409058469055
// [3=3_1409058469046] is removed from cache
// 1_1409058469055
// [4=4_1409058469055] is removed from cache
// 2_1409058469055
// [0=0_1409058469055] is removed from cache
// 3_1409058469055
// [1=1_1409058469055] is removed from cache
// 4_1409058469055
// [2=2_1409058469055] is removed from cache
// 0_1409058469056
// [3=3_1409058469055] is removed from cache
// 1_1409058469056

maximumSize: Least Recently Used (LRU) entries are subject to removal as the size of the cache approaches the maximum size, not necessarily only when the maximum size is met or exceeded. That means if we set maximumSize = 100, some entries might be removed when the size of the cache is 98 or even smaller. When the size is zero, elements are evicted immediately after being loaded into the cache. This can be useful in testing, or to disable caching temporarily without a code change. This feature cannot be used in conjunction with maximumWeight.
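When entries vary a lot in cost, a hedged alternative to maximumSize is maximumWeight combined with a Weigher (com.google.common.cache.Weigher); the length-based weight function below is purely illustrative.

LoadingCache<String, String> weightedCache = CacheBuilder.newBuilder()
        .maximumWeight(100L)
        .weigher(new Weigher<String, String>() {
            @Override
            public int weigh(String key, String value) {
                return value.length(); // entries are charged by value size rather than by count
            }
        })
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                return key + "_" + System.currentTimeMillis();
            }
        });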