Original article: http://streamhacker.com/2008/12/10/how-to-eliminate-mnesia-overload-events/
If you’re using mnesia disc_copies tables and doing a lot of writes all at once, you’ve probably run into the following message:
=ERROR REPORT==== 10-Dec-2008::18:07:19 ===
Mnesia(node@host): ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}
This warning event can get really annoying, especially when it starts happening every second. But you can eliminate these events, or at least drastically reduce their occurrence.
Synchronous Writes
The first thing to do is make sure to use sync_transaction or sync_dirty. Doing synchronous writes will slow down your writes in a good way, since the functions won’t return until your record(s) have been written to the transaction log. The alternative, which is the default, is asynchronous writes, which can fill the transaction log far faster than it gets dumped, causing the above error report.
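A minimal sketch of the difference, assuming mnesia is already started and a table exists whose records match the tuples being written (the record shape here is hypothetical):

```erlang
%% Sketch only: the record passed in is assumed to match an existing table.

write_async(Rec) ->
    %% Default behaviour: commits via the asynchronous transaction log,
    %% so a fast writer can outrun the log dumper.
    mnesia:transaction(fun() -> mnesia:write(Rec) end).

write_sync(Rec) ->
    %% Blocks until the write has reached the transaction log,
    %% throttling the writer to the speed of the log.
    mnesia:sync_transaction(fun() -> mnesia:write(Rec) end).
```

The dirty equivalent is `mnesia:sync_dirty(Fun)`, which skips transaction overhead but still waits for the log.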
Mnesia Application Configuration
If synchronous writes aren’t enough, the next trick is to modify two obscure configuration parameters. The mnesia_overload event generally occurs when the transaction log needs to be dumped, but the previous transaction log dump hasn’t finished yet. Tweaking these parameters will make the transaction log dump less often, and the disc_copies tables dump to disk more often. NOTE: these parameters must be set before mnesia is started; changing them at runtime has no effect. You can set them through the command line or in a config file.
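For example, using the values discussed below (both numbers are workload-dependent, not universal defaults), either directly on the command line or in a config file loaded with `erl -config`:

```erlang
%% On the command line (mnesia must not yet be running):
%%   erl -mnesia dc_dump_limit 40 -mnesia dump_log_write_threshold 50000
%%
%% Or as a sys.config fragment:
[{mnesia, [{dc_dump_limit, 40},
           {dump_log_write_threshold, 50000}]}].
```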
dc_dump_limit
This variable controls how often disc_copies tables are dumped from memory. The default value is 4, which means if the size of the log is greater than the size of table / 4, then a dump occurs. To make table dumps happen more often, increase the value. I’ve found setting this to 40 works well for my purposes.
dump_log_write_threshold
This variable defines the maximum number of writes to the transaction log before a new dump is performed. The default value is 100, so a new transaction log dump is performed after every 100 writes. If you’re doing hundreds or thousands of writes in a short period of time, then there’s no way mnesia can keep up. I set this value to 50000, which is a huge increase, but I have enough RAM to handle it. If you’re worried that this high value means the transaction log will rarely get dumped when there are very few writes occurring, there’s also a dump_log_time_threshold configuration variable, which by default dumps the log every 3 minutes.
How it Works
I might be wrong on the theory since I didn’t actually write or design mnesia, but here’s my understanding of what’s happening. Each mnesia activity is recorded to a single transaction log. This transaction log then gets dumped to table logs, which in turn are dumped to the table file on disk. By increasing the dump_log_write_threshold, transaction log dumps happen much less often, giving each dump more time to complete before the next dump is triggered. And increasing dc_dump_limit helps ensure that the table log is also dumped to disk before the next transaction dump occurs.
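If you want to observe overload events programmatically instead of watching the error logger, mnesia lets a process subscribe to system events. A minimal sketch, assuming mnesia is already running on the node:

```erlang
%% Subscribe the calling process to mnesia system events and log any
%% overload events it receives. Sketch only; run inside a long-lived process.
monitor_overload() ->
    {ok, _Node} = mnesia:subscribe(system),
    overload_loop().

overload_loop() ->
    receive
        {mnesia_system_event, {mnesia_overload, Details}} ->
            io:format("mnesia overloaded: ~p~n", [Details]),
            overload_loop();
        _Other ->
            overload_loop()
    end.
```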
Comments
#3
mryufeng
2010-01-26
"The warning is no problem"? You haven't thought through why the warning is a big problem: its consequences...
#2
20.Shadow
2010-01-26
It does crash.........
The error:
eheap_alloc: cannot allocate 298930300 bytes of memory (of type "heap").
This application has xxx.................
My code is below (it should all be tail-recursive, right?):
(db:getMax and the like can be ignored; they are my own helpers)
Calling hugeWrite(dist, "XXX", 5000000), it crashed at around 3,770,000 writes...
After the crash, dist.DCD was still only about 100 MB...
hugeWrite(Table, Name, MaxTimes) ->
    Fun = fun() -> db:getMax(Table, 0) end,
    {ok, MaxID} = db:doTrans(Fun),
    hugeWrite(0, MaxTimes, Table, MaxID + 1, Name).

hugeWrite(Index, Max, Table, CurrID, Name) ->
    case Index < Max of
        true ->
            Fun = fun() -> db:write({Table, CurrID, Name}) end,
            db:doTrans(Fun),
            io:format("ID = ~p, Name = ~p~n", [CurrID, Name]),
            hugeWrite(Index + 1, Max, Table, CurrID + 1, Name);
        false ->
            io:format("-------- Over ----------~n")
    end.
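A hedged guess at the crash: if the commenter's db:doTrans wraps the default asynchronous mnesia:transaction, five million back-to-back writes can grow the transaction log faster than it is dumped until the emulator runs out of heap. A variant of the loop applying the article's advice, calling mnesia:sync_transaction directly (the db module and its helpers are the commenter's own code and are bypassed here):

```erlang
%% Sketch: same write loop, but each write blocks until it reaches the
%% transaction log, so the log cannot grow unboundedly. Assumes mnesia is
%% running and Table exists with matching record shape.
huge_write_sync(Table, Name, Max) ->
    huge_write_sync(0, Max, Table, 1, Name).

huge_write_sync(Index, Max, Table, CurrID, Name) when Index < Max ->
    {atomic, ok} = mnesia:sync_transaction(
                     fun() -> mnesia:write({Table, CurrID, Name}) end),
    huge_write_sync(Index + 1, Max, Table, CurrID + 1, Name);
huge_write_sync(_Index, _Max, _Table, _CurrID, _Name) ->
    ok.
```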
#1
20.Shadow
2010-01-26
Nice article.
I just ran into this problem: I wanted to test how Mnesia handles a single table growing past 2 GB,
so I called write() many times in a foreach... (synchronously, even)
and got this overloaded warning.
A side question, though: this should be harmless, right? Going by the article, ignoring the warning shouldn't cause any actual errors, should it?