
Cache Coherence with Multi-Processors

Just after finishing an article on Cache Coherence, I found that BNN had already written a good one two years ago. Had I known earlier, I would not have gone to the trouble of writing my own :)

Recently I have been working on the kernel side of a dual-CPU system. For a dual-CPU, or more generally a multi-processor, system, the big challenge for the kernel is how to handle cache coherence.

Conceptually, there are two choices: Write Invalidate and Write Update.

We will talk about Write Invalidate today.

Typically, two protocols fall into the Write Invalidate category: the Write-Through Write Invalidate protocol and Write Once (also called the Write-Back Write Invalidate protocol). Note that the well-known MESI protocol is derived from Write Once. That is why we will focus on Write Once here.
--------------------------
Write Once:

The Write Once protocol was designed to offset the main shortcoming of the Write-Through Write-Invalidate protocol, which introduces extra traffic onto the shared system bus.

Write Once works basically as follows:

(Assume that:
* hardware snooping is enabled on the shared system bus;
* the cache is write-back.
)
There are four states in the Write Once protocol: Valid, Reserved, Dirty, and Invalid.

Initial State:

* On a LOAD MISS, the data is loaded into a cache line and the state goes to VALID.

// Please note: the Write Once protocol guarantees that the data you load from memory is the latest. Why? If, at the time you try to load a cache line, there is a modified copy in another CPU, the snoop protocol aborts the load bus transaction, flushes the other CPU's data to main memory, and then resumes the aborted transaction, so the requesting CPU gets the updated data.
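As a toy sketch of that snoop-assisted load (all names and types here are my own invention, not from any real hardware; a real snooping bus does this in hardware, not software):

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    INVALID = auto()
    VALID = auto()
    RESERVED = auto()
    DIRTY = auto()

@dataclass
class Line:
    addr: int
    data: int = 0
    state: State = State.INVALID

def load_miss(requester: Line, other_caches: list, memory: dict) -> None:
    """Snooped load miss: if another cache holds a DIRTY copy of the
    address, flush it to main memory first, then complete the load,
    so the requester always reads the latest value."""
    for line in other_caches:
        if line.addr == requester.addr and line.state is State.DIRTY:
            memory[line.addr] = line.data    # flush the modified copy
            line.state = State.VALID         # that copy is clean again
    requester.data = memory[requester.addr]  # resume the aborted load
    requester.state = State.VALID
```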

Now, let's investigate the state machine of Write Once Protocol.

***************
VALID State:
***************

On a LOAD HIT, we do nothing. That's right: the cache line is already here, and the CPU is happy to find the data in the cache.

On a LOAD MISS, we restart the initial procedure to load the latest data into the cache line.

On a CPU STORE HIT (a store hit from the current processor), we come to the key part of the Write Once protocol. On a write/store in a UP (uniprocessor) system, we all understand that the cache state goes to DIRTY and ****the data is not written back to main memory****. However, to achieve multi-processor cache coherence, the Write Once protocol works as follows.

The stored data is flushed back to main memory (why? because we need a bus transaction on the bus for other caches to snoop!), and the cache state moves to RESERVED.

This is exactly why the protocol is named "Write Once": the first write to a write-back cache line is written through to main memory, so that the other processors' cache controllers become aware of it and invalidate their corresponding cache lines; thus the whole system keeps only one copy of that cache line.

After this first write-once, subsequent write accesses only change the state to DIRTY; the data stays in the cache line and is not flushed to main memory, the same behavior as the UP write-back case.
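A minimal sketch of this write-once rule (a toy model; no real bus or snooping is modeled here): the first store hit writes through to memory, later ones stay in the cache.

```python
from enum import Enum, auto

class State(Enum):
    VALID = auto()
    RESERVED = auto()
    DIRTY = auto()

def store_hit(state: State, addr: int, value: int,
              cache: dict, memory: dict) -> State:
    """Write-once rule for a store hit on a cached line."""
    cache[addr] = value
    if state is State.VALID:
        memory[addr] = value   # first write: write through once
        return State.RESERVED
    return State.DIRTY         # later writes stay in the cache

cache, memory = {0x40: 0}, {0x40: 0}
s = store_hit(State.VALID, 0x40, 1, cache, memory)  # writes through
s = store_hit(s, 0x40, 2, cache, memory)            # stays in cache
# now cache[0x40] == 2 but memory[0x40] == 1
```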

On a SNOOP STORE HIT (we found another CPU trying to store to that cached address), the write-invalidate semantics tell us that the current processor invalidates its own copy in order to keep only one legal copy of that cache line. In other words, the state goes from VALID to INVALID. Note that we do not have to flush anything. The reason is simple: this processor has not written to the line yet, so we only invalidate our own copy. Later, if we want to read this particular data again, we will have to load it from main memory.

For the VALID state, there are other inputs to consider, such as:

snoop load hit
cpu store miss

How will the CPU react to these two events?
I will leave the questions to you guys......

----------------
Valid State (continued)
----------------

SNOOP LOAD HIT:
This means the current CPU's cache controller **sees** a bus transaction issued by another device, for instance another CPU, to access data that is currently cached. When this happens, the current CPU's cache does nothing and stays in the VALID state. The reason is simple: the other device/CPU will fetch the data from memory, which STILL holds the latest data.

CPU STORE MISS:

How/when could this happen after the cache line was loaded? Yeah, you are right: that cache line could have been **replaced** by other data (remember the LRU algorithm of cache management, set-associative mechanisms, and so on).

When a CPU store miss happens, the data is written into main memory and a copy is also kept in the cache (that is "write allocate" handling of a write miss). After that, the cache snoop protocol moves the state from VALID to RESERVED. Why? Because a write, and more importantly the first write, has just completed!
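Putting the VALID-state behavior together, the transitions can be condensed into a small table (the event names are my own shorthand, not from any specification):

```python
from enum import Enum, auto

class State(Enum):
    INVALID = auto()
    VALID = auto()
    RESERVED = auto()
    DIRTY = auto()

VALID_TRANSITIONS = {
    "cpu_load_hit":    State.VALID,     # data already cached; no change
    "cpu_load_miss":   State.VALID,     # line was replaced; reload it
    "cpu_store_hit":   State.RESERVED,  # first write: write through once
    "cpu_store_miss":  State.RESERVED,  # write-allocate, then write once
    "snoop_load_hit":  State.VALID,     # memory still holds the latest data
    "snoop_store_hit": State.INVALID,   # another CPU wrote; drop our copy
}

def valid_next(event: str) -> State:
    """Next state from VALID for a given event."""
    return VALID_TRANSITIONS[event]
```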

--------------
Reserved
--------------

As I said before, the RESERVED state was introduced to reflect the write-once semantics: the first write flushes data to memory so that other devices/CPUs are able to see/be notified of a bus transaction!

For this state, as with VALID, the inputs can vary as follows:

* CPU LOAD HIT
Simply feed the data back to the CPU and keep the state unchanged.

* CPU WRITE HIT
Since we are using WB (write-back) cache lines, we change our state to DIRTY. In other words, we do not flush data to memory, in contrast to the WT (write-through) approach.

* CPU LOAD MISS

The system simply loads the data from main memory and keeps a copy in the cache (assuming it is not read-through), and resets the state to VALID.

* CPU WRITE MISS

The cache line was replaced some time after it was loaded into the cache. The system simply writes the data to main memory, keeps the latest copy in the cache, and leaves the state as RESERVED.

* SNOOP LOAD HIT

We **see** another device reading the cached data. We do not have to flush or invalidate anything here; what we need to do is change the coherence state to VALID. Question: what would happen if we stayed in RESERVED? For example, if a CPU store hit came next? :-)

* SNOOP WRITE HIT

We **see** a write access issued by another device/CPU. We invalidate our own cache line, which is about to become stale, and move our state to INVALID. The reason we do not have to do any flushing is simple: our local cached value is the same as the one in main memory, so the only thing we need to do is invalidate our private copy.

