Mnesia consumption

Original article: http://www.erlang.org/~hakan/

Knowing the following, you can estimate Mnesia's resource consumption.

Mnesia introduces no hard system limits of its own. The hard limits
are those of the underlying systems, such as the Erlang Run Time
System (ERTS) and the operating system.

At startup Mnesia consumes a small amount of fixed resources,
regardless of how the system is used. A few static processes are
spawned and a few ets tables are created to host the meta data of the
database. If Mnesia is configured to use disk, a disk_log file is
opened. In the following we will discuss the dynamic nature of Mnesia,
such as:

   o memory consumption
   o system limits
   o disk consumption

Memory consumption in Mnesia:
-----------------------------
Mnesia itself does not consume especially much memory, so we do not
discuss that issue here. The resources consumed by Mnesia are affected
by many factors that depend on how Mnesia is used:

   o number of tables
   o number of records per table
   o size of each record
   o table type (set or bag)
   o number of indices per table (including snmp "index")
   o size of key and attributes stored in index
   o number of replicas per table
   o replica storage type
   o number of simultaneous transactions
   o duration of transactions
   o number of updates per transaction
   o size of each updated record
   o frequency of dumping of transaction log
   o duration of checkpoints

A table in Mnesia may be replicated to several nodes.
The storage type of each replica may vary from node to node.
Currently Mnesia supports three storage types:

   o ram_copies - resides in ets tables only
   o disc_only_copies - resides in dets tables only
   o disc_copies - resides in both ets and dets tables
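As a minimal sketch, the three storage types map to `mnesia:create_table/2` options like this. The table and attribute names here are illustrative, and the sketch assumes Mnesia is already started with a schema on the local node.

```erlang
%% Sketch: one table per storage type. Assumes mnesia:start() has been
%% called and a schema exists on node(). Table names are hypothetical.
create_examples() ->
    %% ram_copies: resides in ets only
    {atomic, ok} = mnesia:create_table(cache,
        [{ram_copies, [node()]}, {attributes, [key, val]}]),
    %% disc_only_copies: resides in dets only
    {atomic, ok} = mnesia:create_table(archive,
        [{disc_only_copies, [node()]}, {attributes, [key, val]}]),
    %% disc_copies: resides in both ets and dets
    {atomic, ok} = mnesia:create_table(accounts,
        [{disc_copies, [node()]}, {attributes, [key, val]}]).
```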

The records that applications store in Mnesia are stored in ets and
dets respectively. Mnesia does not add any extra memory consumption of
its own; each record is stored directly in ets and/or dets. ets tables
reside in primary memory, dets tables on disk. Read about their memory
consumption elsewhere (stdlib). When records are read from ets and
dets tables, a deep copy of the entire record is performed. In dets
the copy is performed twice: first the record is read from the file
and allocated on the heap of the dets process, then the record is sent
to the process that ordered the lookup (your application process) and
allocated on its heap.

A table may have indices. Each index is represented as a table of its
own. In an index table, tuples of {Attribute, PrimaryKey} are stored.
Indices for disc_only_copies are stored in dets only; indices for
ram_copies and disc_copies are stored in ets only. Indices are
currently implemented as bags, which means that performance may
degenerate if the indexed attribute has the same value in many records
of the table. Bags with many records sharing the same key should be
avoided.
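A minimal sketch of declaring and using such an index, with an illustrative `person` record (the record and field names are assumptions, not from the original):

```erlang
%% Sketch: a secondary index on the city field, queried via index_read.
-record(person, {id, name, city}).

setup() ->
    {atomic, ok} = mnesia:create_table(person,
        [{ram_copies, [node()]},
         {attributes, record_info(fields, person)},
         {index, [#person.city]}]),   %% creates the {City, Id} index table
    ok.

by_city(City) ->
    %% index_read consults the bag-style index table {Attribute, PrimaryKey}
    mnesia:transaction(fun() ->
        mnesia:index_read(person, City, #person.city)
    end).
```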

A table may be accessible via SNMP. If that feature is activated, a
special kind of index is attached to the table. In this "index table",
tuples of {SnmpKey, PrimaryKey} are stored. The SnmpKey is represented
as a flat string according to the SNMP standard. Read about SNMP
elsewhere (snmpea). SNMP indices are stored in bplus_tree only. Read
about memory consumption in bplus_tree elsewhere (stdlib).

Tables may be updated in three contexts:

   o transaction
   o dirty (sync or async)
   o ets
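The three contexts correspond to three different ways of issuing the same write. As a hedged sketch, for a table `t` holding records of the form `{t, Key, Val}`:

```erlang
%% Sketch: the same update in each of the three contexts.
%% Table name t and record shape {t, K, V} are illustrative.
write_tx(K, V) ->
    %% transaction: locks acquired, update buffered in local ets storage
    mnesia:transaction(fun() -> mnesia:write({t, K, V}) end).

write_dirty(K, V) ->
    %% dirty: goes straight to the main table, no locks, no local storage
    mnesia:dirty_write({t, K, V}).

write_ets(K, V) ->
    %% raw ets: bypasses indices, checkpoint retainers and subscriptions
    mnesia:ets(fun() -> mnesia:write({t, K, V}) end).
```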

Updates performed in transactions are recorded in an ets table local
to the transaction. This means that updated records are stored both in
the local transaction storage and in the main table, where the
committed records reside. (They are also stored on the heap until the
garbage collector decides to remove them.) Info about other updating
operations (i.e. write, delete, delete_object) is also stored in the
local storage. The local storage is erased at the end of the outermost
transaction. If the duration of the outermost transaction is long,
e.g. because of lock conflicts, or the total size of the updated
records is large, memory consumption may become an issue. This is
especially true if the table is replicated, since the contents of the
local storage are processed (and temporarily allocated on the heap)
and parts of them are sent to the other involved nodes as part of the
commit protocol.

Both read and update accesses in transactions require locks to be
acquired. Information about the locks is stored both in the local
transaction storage and in private storage administered by the lock
manager. This information is erased at the end of the outermost
transaction. The memory consumption for the lock info is affected by
the number of records accessed in a transaction and the duration of
the outermost transaction. If Mnesia detects a potential deadlock, it
selects a deadlock victim and restarts its outer transaction; its lock
info and local storage are then erased.

Checkpoints in Mnesia are a feature that retains the old state of one
or more tables even as the tables are updated. If a table that is part
of a checkpoint is updated, the old record value (possibly several
records if the table is a bag) is stored in a checkpoint retainer for
the duration of the checkpoint. If the same record is updated several
times, only the oldest record version is kept in the retainer. For
disc_only_copies replicas the checkpoint retainer is a dets table; for
ram_copies and disc_copies it is an ets table. Read about the memory
consumption of ets and dets elsewhere (stdlib).
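A minimal sketch of a checkpoint life cycle, which bounds how long the retainer (and its memory) is held. The checkpoint name and the backup step are illustrative:

```erlang
%% Sketch: activate a checkpoint over one table, use it, release it.
%% The retainer exists (and accumulates old record versions) only
%% between activation and deactivation.
with_checkpoint(Tab) ->
    {ok, Name, _Nodes} = mnesia:activate_checkpoint(
        [{name, make_ref()}, {max, [Tab]}]),
    %% ... a consistent snapshot is now available, e.g. for
    %% mnesia:backup_checkpoint(Name, "tab.bup") ...
    ok = mnesia:deactivate_checkpoint(Name).
```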

Dirty updates are not performed in a process-local storage; the
updates are performed directly in the main table containing the
committed records. Dirty accesses do not bother with transactions or
lock management.

Raw ets updates (performed inside mnesia:ets/1) assume that the table
is an ets table and that it is local. Raw ets updates are performed
directly in the table containing the committed records. Unlike dirty
accesses and transaction-protected accesses, raw ets updates do not
update indices, checkpoint retainers, subscriptions etc.

System limits:
--------------
There are two system limits that may be worthwhile to think about:

   o number of ets tables
   o number of open file descriptors (ports)

There is an upper system limit on the maximum number of ets tables
that may co-exist in an Erlang node. That limit may be reached if the
number of tables is large, especially if they have several indices or
if there are several checkpoints attached to them. The limit may also
be reached if the number of simultaneous transactions becomes large.
Read about the actual limit elsewhere (stdlib).
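A quick way to gauge headroom, sketched below. Note that `erlang:system_info(ets_limit)` only exists in later OTP releases, and the default limit is traditionally raised via the ERL_MAX_ETS_TABLES environment variable:

```erlang
%% Sketch: how many ets tables exist vs. the node's limit.
%% ets_limit is an assumption about the running OTP release.
ets_headroom() ->
    Used  = length(ets:all()),
    Limit = erlang:system_info(ets_limit),
    {Used, Limit}.
```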

There is an upper limit on the maximum number of open file descriptors
(ports) in an Erlang node. That limit may be reached if the number of
disk-resident Mnesia tables is large. Each replica whose storage type
is disc_only_copies constantly occupies a file descriptor (held by the
dets server). Updates that involve Mnesia tables residing on disk
(disc_copies and disc_only_copies) are written into the transaction
log. The records in the transaction log are propagated (dumped) to the
corresponding dets files at regular intervals. The dump log interval
is configurable. During a dump of the transaction log, the dets files
corresponding to the disc_copies tables are opened.
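The dump interval is controlled by Mnesia's application parameters. A sketch of tuning them (the threshold values here are arbitrary examples; set them before `mnesia:start()`):

```erlang
%% Sketch: tuning how often the transaction log is dumped.
tune_dump() ->
    %% dump after this many writes to the transaction log ...
    ok = application:set_env(mnesia, dump_log_write_threshold, 1000),
    %% ... or after this many milliseconds, whichever comes first
    ok = application:set_env(mnesia, dump_log_time_threshold, 60000).
```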

We strongly recommend that the Mnesia directory reside on a local
disk. There are several reasons for this. Firstly, the access time of
files on a local disk is normally faster than access to remotely
mounted disks, and it is independent of network traffic and load on
other computers. Secondly, the whole emulator may be blocked while I/O
is performed; if the remote file system daemons do not respond for
some reason, the emulator may be blocked for quite a while.

Disk consumption:
-----------------
Updates that involve Mnesia tables residing on disk (disc_copies and
disc_only_copies) are written into the transaction log. In the log,
the same record may occur any number of times: each update causes the
new records to be appended to the log, regardless of whether they are
already stored in the log or not. If the same disk-resident record is
updated over and over again, it becomes really important to configure
Mnesia to dump the transaction log rather frequently, in order to
reduce the disk consumption.

The transaction log is implemented with disk_log (stdlib). Appending a
commit record to the disk_log is faster than reading the same commit
record and updating the corresponding dets files. This means that the
transaction log may grow faster than the dumper is able to consume it.
When that happens, a special overload event is sent to Mnesia's event
handler. If you suspect that this may become an issue for you (i.e.
the application performs lots of updates on disk-resident tables), you
should subscribe to Mnesia's system events and reduce the load on
Mnesia when the events occur. When it is time to dump the log (i.e.
any of the dump log thresholds has been reached), the log file is
renamed and a new empty one is created. This means that the disk space
occupied by the transaction log may be twofold at the end of the dump
if the applications are very update intense.
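Subscribing to these events can be sketched as follows; the logging and the throttle hook are illustrative placeholders:

```erlang
%% Sketch: watch for Mnesia overload events so the application can
%% throttle its update rate.
watch_overload() ->
    {ok, _Node} = mnesia:subscribe(system),
    loop().

loop() ->
    receive
        {mnesia_system_event, {mnesia_overload, Details}} ->
            %% overload: the dumper cannot keep up with the log;
            %% a real application would slow its writers here
            error_logger:warning_msg("Mnesia overloaded: ~p~n", [Details]),
            loop();
        {mnesia_system_event, _Other} ->
            loop()
    end.
```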

Normally Mnesia updates the dets files in place during the dump log
activity, but it is possible to configure Mnesia to first make a copy
of the entire (possibly large) dets file and apply the log to the
copy. When the dump is done, the old file is removed and the new one
is renamed. This feature may cause the disk space occupied by dets
tables to occasionally be doubled.

If Mnesia encounters a fatal error, Mnesia is shut down and a core
dump is generated. The core dump consists of all log files, the
schema, lock info etc., so it may be quite large. By default the core
dump is written to a file in the current directory. It may be a good
idea to look for such files now and then and get rid of them, or to
install an event handler which takes care of the core dump binary.

Hakan Mattsson <hakan@erix.ericsson.se>
