Reading the Caffeine documentation
# caffeine

Caffeine is a high-performance Java caching library that provides near-optimal hit rates.

A Cache is similar to a ConcurrentMap, but not identical. A ConcurrentMap keeps every element added to it until the element is explicitly removed. A Cache, in contrast, can be configured to evict entries automatically, bounding the cache's memory footprint.

## Cache

### Population

Caffeine provides the following population strategies:

#### Manual

```java
Cache<Key, Graph> cache = Caffeine.newBuilder()
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .maximumSize(10_000)
    .build();

// Lookup an entry, or null if not found
Graph graph = cache.getIfPresent(key);
// Lookup and compute an entry if absent, or null if not computable
graph = cache.get(key, k -> createExpensiveGraph(key));
// Insert or update an entry
cache.put(key, graph);
// Remove an entry
cache.invalidate(key);
```

The Cache interface supports explicit lookup, invalidation, and update of cache entries.

An entry can be inserted directly with `cache.put(key, value)`; this overwrites any existing entry for the key.

An entry can also be inserted with `cache.get(key, k -> value)`, which looks up the key and, if absent, atomically computes the value with the supplied function and inserts the result. If the entry is not computable the function may return null, in which case nothing is stored and null is returned; if the computation throws, the exception is propagated to the caller.

Beyond these methods, entries can also be modified through the ConcurrentMap view returned by `cache.asMap()`.
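
The atomic get-or-compute behaviour described above mirrors `ConcurrentMap.computeIfAbsent`. A minimal stdlib-only sketch (not using Caffeine itself; `expensiveCompute` is a hypothetical stand-in for a costly loader such as `createExpensiveGraph`):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AtomicComputeDemo {
    static final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    // Stand-in for an expensive computation such as createExpensiveGraph(key)
    static String expensiveCompute(String key) {
        return key.toUpperCase();
    }

    // Atomically look up the entry, computing and inserting it when absent.
    // If the mapping function returns null ("not computable"), no entry is
    // inserted and null is returned to the caller.
    static String getOrLoad(String key) {
        return cache.computeIfAbsent(key, k -> expensiveCompute(k));
    }

    public static void main(String[] args) {
        System.out.println(getOrLoad("graph"));  // computed and cached
        System.out.println(getOrLoad("graph"));  // served from the map
    }
}
```

The key property, shared with Caffeine's `get(key, mappingFunction)`, is that concurrent callers for the same key observe a single computation rather than racing to insert.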
#### Loading

```java
// build accepts a CacheLoader
LoadingCache<Key, Graph> cache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .build(key -> createExpensiveGraph(key));

// Lookup and compute an entry if absent, or null if not computable
Graph graph = cache.get(key);
// Lookup and compute entries that are absent
Map<Key, Graph> graphs = cache.getAll(keys);
```

A LoadingCache is a Cache built with an attached CacheLoader.

Batch lookups are performed with getAll. By default, getAll issues one call to CacheLoader.load for each key that is absent from the cache. When a bulk retrieval is more efficient than many individual lookups, override CacheLoader.loadAll.
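
The single-key versus batch contrast can be sketched without Caffeine. Below, `Loader` is a hypothetical analogue of CacheLoader whose default `loadAll` falls back to one `load` call per key; a batch-capable implementation would override it:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchLoaderDemo {
    // Hypothetical analogue of CacheLoader: loadAll defaults to per-key loads.
    interface Loader<K, V> {
        V load(K key);

        default Map<K, V> loadAll(Iterable<? extends K> keys) {
            Map<K, V> result = new LinkedHashMap<>();
            for (K key : keys) {
                result.put(key, load(key));  // one request per missing key
            }
            return result;
        }
    }

    // A batch-capable loader would override loadAll to fetch every key at
    // once (e.g. a single database query) instead of inheriting this default.
    static final Loader<String, Integer> lengths = key -> key.length();

    public static void main(String[] args) {
        Map<String, Integer> loaded = lengths.loadAll(List.of("a", "bb"));
        System.out.println(loaded);
    }
}
```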

#### Asynchronous (manual)

```java
AsyncCache<Key, Graph> cache = Caffeine.newBuilder()
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .maximumSize(10_000)
    .buildAsync();

// Lookup an entry, or null if not found
CompletableFuture<Graph> graph = cache.getIfPresent(key);
// Lookup and asynchronously compute an entry if absent
graph = cache.get(key, k -> createExpensiveGraph(key));
// Insert or update an entry
cache.put(key, graph);
// Remove an entry
cache.synchronous().invalidate(key);
```

An AsyncCache computes entries asynchronously and returns them as CompletableFuture values.

An AsyncCache also provides a synchronous view via its `synchronous()` method.

By default the executor is `ForkJoinPool.commonPool()`; it can be overridden with `Caffeine.executor(Executor)`.
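
The asynchronous computation model can be illustrated with plain `CompletableFuture`: the value is computed on a supplied executor (here an explicit pool standing in for the default `ForkJoinPool.commonPool()`), and the caller receives a future immediately:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncComputeDemo {
    // Compute the value for a key on the given executor, as AsyncCache does.
    static CompletableFuture<String> loadAsync(String key, ExecutorService executor) {
        return CompletableFuture.supplyAsync(() -> key.toUpperCase(), executor);
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // join() blocks only for this demo; real callers would compose
            // the future with thenApply/thenAccept instead.
            String value = loadAsync("graph", pool).join();
            System.out.println(value);
        } finally {
            pool.shutdown();
        }
    }
}
```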

#### Async Loading

```java
AsyncLoadingCache<Key, Graph> cache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    // Either: Build with a synchronous computation that is wrapped as asynchronous
    .buildAsync(key -> createExpensiveGraph(key));
    // Or: Build with an asynchronous computation that returns a future
    //.buildAsync((key, executor) -> createExpensiveGraphAsync(key, executor));

// Lookup and asynchronously compute an entry if absent
CompletableFuture<Graph> graph = cache.get(key);
// Lookup and asynchronously compute entries that are absent
CompletableFuture<Map<Key, Graph>> graphs = cache.getAll(keys);
```

An AsyncLoadingCache is an AsyncCache paired with an AsyncCacheLoader.

Likewise, an AsyncCacheLoader supports overriding both the single-key and the bulk load methods (asyncLoad and asyncLoadAll).

### Eviction

Caffeine provides three types of eviction: size-based, time-based, and reference-based.

#### Size-based

```java
// Evict based on the number of entries in the cache
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .maximumSize(10_000)
    .build(key -> createExpensiveGraph(key));

// Evict based on the number of vertices in the cache
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .maximumWeight(10_000)
    .weigher((Key key, Graph graph) -> graph.vertices().size())
    .build(key -> createExpensiveGraph(key));
```

Use `Caffeine.maximumSize(long)` when the cache should not grow beyond a fixed number of entries; entries that have not been used recently or often are evicted.

If entries carry different weights, provide a weighing function with `Caffeine.weigher(Weigher)` and bound the cache's total weight with `Caffeine.maximumWeight(long)`.
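
The size-bounded behaviour can be approximated with a stdlib `LinkedHashMap` in access order. This is only an illustration of capacity-based eviction of the least-recently-used entry, not Caffeine's actual Window TinyLFU policy:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCacheDemo {
    // Access-ordered LinkedHashMap that evicts the eldest entry past maxSize.
    static <K, V> Map<K, V> boundedCache(int maxSize) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = boundedCache(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");      // touch "a" so "b" becomes least recently used
        cache.put("c", 3);   // exceeds the bound: "b" is evicted
        System.out.println(cache.keySet());  // [a, c]
    }
}
```

A weight-based bound works the same way conceptually, except the eviction condition compares the sum of per-entry weights against the maximum weight instead of counting entries.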

#### Time-based

```java
// Evict based on a fixed expiration policy
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .expireAfterAccess(5, TimeUnit.MINUTES)
    .build(key -> createExpensiveGraph(key));
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .build(key -> createExpensiveGraph(key));

// Evict based on a varying expiration policy
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .expireAfter(new Expiry<Key, Graph>() {
      public long expireAfterCreate(Key key, Graph graph, long currentTime) {
        // Use wall clock time, rather than nanotime, if from an external resource
        long seconds = graph.creationDate().plusHours(5)
            .minus(System.currentTimeMillis(), MILLIS)
            .toEpochSecond();
        return TimeUnit.SECONDS.toNanos(seconds);
      }
      public long expireAfterUpdate(Key key, Graph graph,
          long currentTime, long currentDuration) {
        return currentDuration;
      }
      public long expireAfterRead(Key key, Graph graph,
          long currentTime, long currentDuration) {
        return currentDuration;
      }
    })
    .build(key -> createExpensiveGraph(key));
```

Caffeine provides three approaches to time-based eviction:

- `expireAfterAccess(long, TimeUnit)`: expire entries a fixed duration after the last read or write
- `expireAfterWrite(long, TimeUnit)`: expire entries a fixed duration after creation or the last write
- `expireAfter(Expiry)`: expire entries according to a custom policy

Expiration is performed during writes, and occasionally during reads. Scheduling and triggering of expiration events are performed in O(1) time.

To have entries expire promptly, rather than waiting for cache activity to trigger maintenance, supply a scheduling thread via `Caffeine.scheduler(Scheduler)`.
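
The write-stamped expiration described above can be sketched with per-entry timestamps: each read checks whether the entry's age exceeds the TTL and lazily drops it if so. This is a simplified analogue of `expireAfterWrite` (ignoring Caffeine's O(1) timer wheel); the time source is injectable so the behaviour can be tested deterministically:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.LongSupplier;

public class ExpireAfterWriteDemo {
    record Entry(String value, long writeNanos) {}

    final ConcurrentMap<String, Entry> map = new ConcurrentHashMap<>();
    final long ttlNanos;
    final LongSupplier clock;  // injectable time source; System::nanoTime in real use

    ExpireAfterWriteDemo(long ttlNanos, LongSupplier clock) {
        this.ttlNanos = ttlNanos;
        this.clock = clock;
    }

    void put(String key, String value) {
        map.put(key, new Entry(value, clock.getAsLong()));
    }

    // Returns null when the entry is absent or older than the TTL;
    // an expired entry is lazily removed by this read.
    String getIfPresent(String key) {
        Entry e = map.get(key);
        if (e == null) return null;
        if (clock.getAsLong() - e.writeNanos() > ttlNanos) {
            map.remove(key, e);
            return null;
        }
        return e.value();
    }

    public static void main(String[] args) {
        long[] now = {0};
        ExpireAfterWriteDemo cache = new ExpireAfterWriteDemo(100, () -> now[0]);
        cache.put("k", "v");
        System.out.println(cache.getIfPresent("k"));  // v
        now[0] = 200;                                 // advance past the TTL
        System.out.println(cache.getIfPresent("k"));  // null
    }
}
```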

#### Reference-based

```java
// Evict when neither the key nor value are strongly reachable
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .weakKeys()
    .weakValues()
    .build(key -> createExpensiveGraph(key));

// Evict when the garbage collector needs to free memory
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .softValues()
    .build(key -> createExpensiveGraph(key));
```

Caffeine allows the cache to cooperate with garbage collection by using weak references for keys or values and soft references for values.
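
The underlying mechanism is the JDK's reference types: a weakly referenced object becomes eligible for collection once no strong references remain. A small sketch (GC behaviour is a hint, so the second print cannot be guaranteed):

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object key = new Object();
        WeakReference<Object> ref = new WeakReference<>(key);
        System.out.println(ref.get() == key);  // true: still strongly reachable

        key = null;   // drop the only strong reference
        System.gc();  // request a collection; clearing is not guaranteed
        System.out.println(ref.get());  // typically null once collected
    }
}
```

Note that with `weakKeys` the cache compares keys by identity (`==`) rather than `equals`, which changes lookup semantics for keys that are equal but not the same object.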

### Removal

Entries can be removed explicitly with the following methods:

```java
// individual key
cache.invalidate(key);
// bulk keys
cache.invalidateAll(keys);
// all keys
cache.invalidateAll();
```

#### Removal listeners

```java
Cache<Key, Graph> graphs = Caffeine.newBuilder()
    .evictionListener((Key key, Graph graph, RemovalCause cause) ->
        System.out.printf("Key %s was evicted (%s)%n", key, cause))
    .removalListener((Key key, Graph graph, RemovalCause cause) ->
        System.out.printf("Key %s was removed (%s)%n", key, cause))
    .build();
```

A listener registered with `Caffeine.removalListener(RemovalListener)` runs whenever an entry is removed; the operation is executed asynchronously on the cache's Executor.

When the operation must run synchronously as part of eviction, use `Caffeine.evictionListener(RemovalListener)` instead. This listener is only invoked for removals where `RemovalCause.wasEvicted()` is true.

### Compute

Using compute, Caffeine can atomically execute operations when an entry is created, updated, or evicted:

```java
Cache<Key, Graph> graphs = Caffeine.newBuilder()
    .evictionListener((Key key, Graph graph, RemovalCause cause) -> {
      // atomically intercept the entry's eviction
    }).build();

graphs.asMap().compute(key, (k, v) -> {
  Graph graph = createExpensiveGraph(key);
  ... // update a secondary store
  return graph;
});
```

### Statistics

Statistics collection is enabled with `Caffeine.recordStats()`. `Cache.stats()` then returns a CacheStats object that exposes, among others:

- `hitRate()`: the ratio of cache requests that were hits
- `evictionCount()`: the number of entries evicted from the cache
- `averageLoadPenalty()`: the average time spent loading new values

```java
Cache<Key, Graph> graphs = Caffeine.newBuilder()
    .maximumSize(10_000)
    .recordStats()
    .build();
```
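
The meaning of `hitRate()` can be made concrete with a tiny counter: hitRate = hits / (hits + misses). This is only a sketch of what CacheStats aggregates, using the convention (shared by CacheStats) of reporting 1.0 when no requests have been recorded:

```java
import java.util.concurrent.atomic.LongAdder;

public class StatsDemo {
    final LongAdder hits = new LongAdder();
    final LongAdder misses = new LongAdder();

    void recordHit() { hits.increment(); }
    void recordMiss() { misses.increment(); }

    // hitRate = hits / (hits + misses), defined as 1.0 before any requests
    double hitRate() {
        long h = hits.sum();
        long m = misses.sum();
        long total = h + m;
        return total == 0 ? 1.0 : (double) h / total;
    }

    public static void main(String[] args) {
        StatsDemo stats = new StatsDemo();
        stats.recordHit();
        stats.recordHit();
        stats.recordMiss();
        System.out.println(stats.hitRate());  // 2 hits out of 3 requests
    }
}
```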

### Cleanup

By default, Caffeine does not clean up expired or invalidated entries immediately. Instead it performs a small amount of maintenance work after write operations, and occasionally after read operations when writes are rare.

If your cache sees a high throughput of reads and writes, you do not need to worry about cleaning up expired entries. If reads and writes are both rare, an external thread should call `Cache.cleanUp()` periodically to trigger maintenance.

```java
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .scheduler(Scheduler.systemScheduler())
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .build(key -> createExpensiveGraph(key));
```

Alternatively, as shown above, a Scheduler can be specified so that expired entries are cleaned up promptly.
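
For a rarely used cache, the external cleanup thread mentioned earlier can be sketched with a `ScheduledExecutorService`; here the `Runnable` stands in for a call such as `cache::cleanUp`:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PeriodicCleanupDemo {
    // Schedule the maintenance task at a fixed delay; the task stands in
    // for cache.cleanUp() on a cache with rare reads and writes.
    static ScheduledExecutorService startCleanup(Runnable cleanUp, long delayMillis) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(cleanUp, 0, delayMillis, TimeUnit.MILLISECONDS);
        return scheduler;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger runs = new AtomicInteger();
        ScheduledExecutorService scheduler = startCleanup(runs::incrementAndGet, 10);
        TimeUnit.MILLISECONDS.sleep(60);  // let a few maintenance passes happen
        scheduler.shutdown();
        System.out.println(runs.get() > 0);  // true
    }
}
```

Passing `Scheduler.systemScheduler()` to Caffeine achieves the same effect without managing your own thread.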