Mutex vs Semaphore
Mutex:
Is a key to a toilet. One person can have the key – occupy the toilet – at a time. When finished, the person gives (frees) the key to the next person in the queue.
Officially: “Mutexes are typically used to serialise access to a section of re-entrant code that cannot be executed concurrently by more than one thread. A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section.”
Ref: Symbian Developer Library
Semaphore:
Is the number of free identical toilet keys. For example, say we have four toilets with identical locks and keys. The semaphore count – the count of keys – is set to 4 at the beginning (all four toilets are free), and the count is decremented as people come in. If all toilets are full, i.e. there are no free keys left, the semaphore count is 0. Now, when one person leaves a toilet, the semaphore is increased to 1 (one free key), and the key is given to the next person in the queue.
Officially: “A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore).”
Difference Between Semaphores and Mutex
After reading through the material above, some pretty clear distinctions should have emerged. However, I’d like to reiterate those differences here, along with some other notable differences between semaphores and Mutexes.
- A semaphore can be a Mutex but a Mutex can never be a semaphore. This simply means that a binary semaphore can be used as a Mutex, but a Mutex can never exhibit the full functionality of a semaphore.
- Both semaphores and Mutexes (at least on the latest kernel) are non-recursive in nature.
- No one owns a semaphore, whereas a Mutex is owned and the owner is held responsible for it. This is an important distinction from a debugging perspective.
- In the case of a Mutex, the thread that owns the Mutex is responsible for freeing it. However, in the case of semaphores, this condition is not required: any other thread can signal to free the semaphore by using the sem_post() function (see the sketch after this list).
- A Mutex, by definition, is used to serialize access to a section of re-entrant code that cannot be executed concurrently by more than one thread. A semaphore, by definition, restricts the number of simultaneous users of a shared resource up to a maximum number.
- Another difference that matters to developers is that (named) semaphores are system-wide and remain in the form of files on the filesystem unless otherwise cleaned up, whereas Mutexes are process-wide and get cleaned up automatically when the process exits.
- The nature of semaphores makes it possible to use them to synchronize related and unrelated processes, as well as threads. Mutexes can be used only to synchronize threads and, at most, related processes (the pthread implementation on the latest kernel comes with a feature that allows a Mutex to be shared between processes).
- According to the kernel documentation, Mutexes are lighter than semaphores. What this means is that a program using semaphores has a higher memory footprint than a program using Mutexes.
- From a usage perspective, a Mutex has simpler semantics than a semaphore.
- A Mutex allows only serial access to a resource, whereas semaphores, in addition to allowing serial access, can also be used to access a pool of resources in parallel. For example, consider a resource R accessed by n users. When using a Mutex, we would need a Mutex “m” to lock and unlock the resource, thus allowing only one user at a time to use resource R. In contrast, a counting semaphore can allow up to n users to access resource R concurrently.
- The advantage of semaphores over other synchronization mechanisms is that they can be used to synchronize two related or unrelated processes trying to access the same resource.
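To make the ownership point above concrete, here is a minimal sketch using the POSIX semaphore API: the main thread waits on a semaphore that a different thread posts, which is exactly what a mutex's ownership rule forbids. The producer function and the data_ready variable are illustrative names, not part of any particular codebase.

```c
/* Minimal sketch: a semaphore has no owner, so the thread that waits on it
 * need not be the thread that posts it. Compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t data_ready;            /* unnamed semaphore, initial count 0 */

static void *producer(void *arg)
{
    (void)arg;
    puts("producer: data prepared");
    sem_post(&data_ready);          /* signal from a thread that never waited */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    sem_init(&data_ready, 0, 0);    /* pshared = 0: shared between threads only */
    pthread_create(&tid, NULL, producer, NULL);
    sem_wait(&data_ready);          /* main thread blocks until producer posts */
    puts("main: woken by producer");
    pthread_join(tid, NULL);
    sem_destroy(&data_ready);
    return 0;
}
```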
In operating system terminology, the mutex and the semaphore are kernel resources that provide synchronization services (also called synchronization primitives). Why do we need more than one such primitive? Wouldn’t just one be sufficient? To answer these questions, we need to understand a few keywords; please read the posts on atomicity and critical sections. We will illustrate these concepts with examples, rather than following the usual OS textbook description.
The producer-consumer problem:
Note that this is a generalized explanation. Practical details vary between implementations.
Consider the standard producer-consumer problem. Assume we have a buffer 4096 bytes long. A producer thread collects data and writes it to the buffer. A consumer thread processes the collected data from the buffer. The objective is that both threads should not operate on the buffer at the same time.
Using Mutex:
A mutex provides mutual exclusion: either the producer or the consumer can hold the key (the mutex) and proceed with its work. As long as the producer is filling the buffer, the consumer needs to wait, and vice versa.
At any point in time, only one thread can work with the entire buffer. The concept can be generalized using a semaphore.
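As a rough sketch of this single-mutex approach (names illustrative, buffer handling stubbed out): one pthread mutex guards the entire 4096-byte buffer, so the producer and the consumer can never touch it at the same time. Note that this shows only the mutual exclusion; a real producer-consumer solution would also need a way to signal “data available”, e.g. a condition variable or a semaphore.

```c
/* Sketch: a single pthread_mutex_t guards the whole 4096-byte buffer. */
#include <pthread.h>
#include <string.h>

#define BUF_SIZE 4096

static char buffer[BUF_SIZE];
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

void produce(const char *data, size_t len)
{
    pthread_mutex_lock(&buf_lock);      /* consumer is excluded while we write */
    memcpy(buffer, data, len < BUF_SIZE ? len : BUF_SIZE);
    pthread_mutex_unlock(&buf_lock);
}

void consume(char *out, size_t len)
{
    pthread_mutex_lock(&buf_lock);      /* producer is excluded while we read */
    memcpy(out, buffer, len < BUF_SIZE ? len : BUF_SIZE);
    pthread_mutex_unlock(&buf_lock);
}
```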
Using Semaphore:
A semaphore is a generalized mutex. Instead of a single buffer, we can split the 4 KB buffer into four 1 KB buffers (identical resources). A semaphore can be associated with these four buffers, so the consumer and the producer can work on different buffers at the same time.
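A hedged sketch of that generalization, assuming we track free 1 KB slots with a counting semaphore initialized to 4; the slot-bookkeeping helpers and names are illustrative, not a definitive implementation.

```c
/* Sketch: the 4 KB buffer is split into four 1 KB slots; a counting
 * semaphore initialized to 4 counts how many slots are currently free. */
#include <pthread.h>
#include <semaphore.h>
#include <stdbool.h>

#define SLOTS     4
#define SLOT_SIZE 1024

static char slots[SLOTS][SLOT_SIZE];
static bool slot_in_use[SLOTS];
static sem_t free_slots;                                      /* counts free slots   */
static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;  /* guards slot_in_use  */

void buffers_init(void)
{
    sem_init(&free_slots, 0, SLOTS);               /* all four slots start free */
}

char *slot_acquire(int *index_out)
{
    sem_wait(&free_slots);                         /* blocks if the count is 0  */
    pthread_mutex_lock(&map_lock);
    int i = 0;
    while (slot_in_use[i]) i++;                    /* the semaphore guarantees a free slot */
    slot_in_use[i] = true;
    pthread_mutex_unlock(&map_lock);
    *index_out = i;
    return slots[i];
}

void slot_release(int i)
{
    pthread_mutex_lock(&map_lock);
    slot_in_use[i] = false;
    pthread_mutex_unlock(&map_lock);
    sem_post(&free_slots);                         /* one more "key" available  */
}
```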
Misconception:
There is an ambiguity between the binary semaphore and the mutex. We may have come across the claim that a mutex is a binary semaphore. But it is not! The purposes of a mutex and a semaphore are different. Perhaps, due to the similarity in their implementation, a mutex gets referred to as a binary semaphore.
Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource. Only one task (a thread or a process, depending on the OS abstraction) can acquire the mutex at a time. This means there is ownership associated with a mutex, and only the owner can release the lock (the mutex).
A semaphore is a signaling mechanism (an “I am done, you can carry on” kind of signal). For example, if you are listening to songs on your mobile (assume that is one task) and your friend calls you, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.
General Questions:
1. Can a thread acquire more than one lock (Mutex)?
Yes. It is possible that a thread needs more than one resource, and hence more than one lock. If any lock is not available, the thread will wait (block) on that lock.
2. Can a mutex be locked more than once?
A mutex is a lock; only one state (locked/unlocked) is associated with it. However, a recursive mutex can be locked more than once (on POSIX-compliant systems), in which case a count is associated with it, yet it still has only one state (locked/unlocked). The programmer must unlock the mutex as many times as it was locked.
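A minimal sketch of a recursive mutex on a POSIX-compliant system, using pthread_mutexattr_settype() with PTHREAD_MUTEX_RECURSIVE; the outer/inner function names are illustrative.

```c
/* Sketch: a recursive mutex may be locked repeatedly by the same thread,
 * but must be unlocked the same number of times before it is released. */
#include <pthread.h>

static pthread_mutex_t rlock;

void recursive_mutex_init(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&rlock, &attr);
    pthread_mutexattr_destroy(&attr);
}

void inner(void)
{
    pthread_mutex_lock(&rlock);     /* same owner: count becomes 2, no block   */
    /* ... protected work ... */
    pthread_mutex_unlock(&rlock);   /* count back to 1, still owned by caller  */
}

void outer(void)
{
    pthread_mutex_lock(&rlock);     /* lock count becomes 1                    */
    inner();                        /* re-enters the same lock                 */
    pthread_mutex_unlock(&rlock);   /* count back to 0, mutex actually released */
}
```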
3. What will happen if a non-recursive mutex is locked more than once?
Deadlock. If a thread that has already locked a mutex tries to lock it again, it enters the waiting list of that mutex, which results in a deadlock, because no other thread can unlock the mutex on its behalf. An operating system implementer can take care to identify the owner of the mutex and return an error if it is already locked by the same thread, to prevent such deadlocks.
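POSIX exposes exactly this kind of owner check through the error-checking mutex type: a second lock by the owning thread fails with EDEADLK instead of deadlocking. A small sketch, with most error handling trimmed for brevity:

```c
/* Sketch: PTHREAD_MUTEX_ERRORCHECK makes a relock by the owner fail
 * with EDEADLK rather than blocking forever. */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutex_t m;
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);              /* first lock succeeds              */
    int rc = pthread_mutex_lock(&m);     /* second lock by the same thread   */
    if (rc == EDEADLK)
        puts("relock detected, not deadlocked");

    pthread_mutex_unlock(&m);
    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}
```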
4. Are a binary semaphore and a mutex the same?
No. We suggest treating them separately, as explained above: signaling versus locking mechanisms. However, a binary semaphore can experience the same critical issues (e.g. priority inversion) associated with a mutex. We will cover these in a later article.
A programmer should prefer a mutex rather than creating a semaphore with a count of 1.
5. What is the difference between a mutex and a critical section?
Some operating systems use the term critical section for a similar object in their API. Usually a mutex is a costly operation due to the protection protocols associated with it. Ultimately, the objective of a mutex is atomic access. There are other ways to achieve atomic access, such as disabling interrupts, which can be much faster but ruins responsiveness; the alternate (critical section) API may make use of disabling interrupts.
6. What are events?
The semantics of mutexes, semaphores, events, critical sections, etc. are similar: all are synchronization primitives. They differ in the cost of using them. We should consult the OS documentation for exact details.
7. Can we acquire mutex/semaphore in an Interrupt Service Routine?
An ISR runs asynchronously, in the context of the currently running thread. It is not recommended to make a blocking call on a synchronization primitive from an ISR: ISRs are meant to be short, and a call to lock a mutex or wait on a semaphore may block the currently running thread. However, an ISR can signal a semaphore or unlock a mutex.
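ISRs are OS-specific, but the closest user-space analogue in POSIX is a signal handler: sem_post() is async-signal-safe, so it may be called from one, whereas a blocking sem_wait() should not be. A hedged sketch, with illustrative names:

```c
/* Sketch: a signal handler (stand-in for an ISR) posts a semaphore to
 * wake a waiting "task"; it never blocks inside the handler. */
#include <errno.h>
#include <semaphore.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static sem_t work_pending;

static void on_sigusr1(int signo)
{
    (void)signo;
    sem_post(&work_pending);        /* non-blocking and async-signal-safe */
}

int main(void)
{
    sem_init(&work_pending, 0, 0);
    signal(SIGUSR1, on_sigusr1);

    printf("send SIGUSR1 to pid %d to wake the worker\n", (int)getpid());
    while (sem_wait(&work_pending) == -1 && errno == EINTR)
        ;                           /* restart if interrupted by the signal */
    puts("worker: woken by the handler");

    sem_destroy(&work_pending);
    return 0;
}
```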
8. What do we mean by a “thread blocking on a mutex/semaphore” when it is not available?
Every synchronization primitive has a waiting list associated with it. When the resource is not available, the requesting thread is moved from the processor’s run queue to the waiting list of the synchronization primitive. When the resource becomes available, the highest-priority thread on the waiting list gets the resource (more precisely, it depends on the scheduling policy).
9. Is it necessary that a thread must always block when a resource is not available?
Not necessarily. If the design specifies what has to be done when the resource is not available, the thread can take up that work instead (a different code branch). To support such application requirements, the OS provides non-blocking APIs.
For example, the POSIX pthread_mutex_trylock() API: when the mutex is not available, the function returns immediately, whereas pthread_mutex_lock() blocks the thread until the resource is available.
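A short sketch of that non-blocking path; the function name and the "other work" branch are illustrative.

```c
/* Sketch: pthread_mutex_trylock() returns immediately (EBUSY) when the
 * lock is held, letting the thread take an alternate code branch. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t res_lock = PTHREAD_MUTEX_INITIALIZER;

void use_resource_or_do_other_work(void)
{
    if (pthread_mutex_trylock(&res_lock) == 0) {
        /* ... work with the shared resource ... */
        pthread_mutex_unlock(&res_lock);
    } else {
        /* lock held elsewhere: do not block, take the alternate branch */
        puts("resource busy, doing other work instead");
    }
}
```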
Nice articles on the topic:
From part 2:
The mutex is similar in principle to the binary semaphore, with one significant difference: the principle of ownership. Ownership is the simple concept that when a task locks (acquires) a mutex, only that task can unlock (release) it. If a task tries to unlock a mutex it hasn’t locked (and thus doesn’t own), an error condition is encountered and, most importantly, the mutex is not unlocked. If the mutual exclusion object doesn’t have ownership then, irrespective of what it is called, it is not a mutex.
From:
http://www.geeksforgeeks.org/mutex-vs-semaphore/
http://www.lessons99.com/mutex-vs-semaphore.html