2 The Eight Myths of Erlang Performance
The project has finally given me a little free time, so I have been reading the Erlang docs. They are good, so I am working through this section myself and translating it in case I have not understood it well. My own summary: there are many points here, and it takes systematic study and repeated experimentation to really grasp these principles and design decisions. I will work through them bit by bit.
Note: the text inside 【】 brackets is my own commentary.
- Some truths seem to live on well beyond their best-before date, perhaps because "information" spreads from person to person faster than a single release note that says, for instance, that funs have become faster.
- Some of these "truths" still seem as true as they ever were, because word of mouth travels much faster than release notes (and word of mouth does not carry every change), for example the note that anonymous funs have become faster.
Here we try to kill the old truths (or semi-truths) that have become myths.
2.1 Myth: Funs are slow 【My take: 60% false, 40% true; a fun call is still quite a bit slower than an MFA call, so prefer MFA in frequently executed code.】
- Yes, funs used to be slow. Very slow. Slower than apply/3. Originally, funs were implemented using nothing more than compiler trickery, ordinary tuples, apply/3, and a great deal of ingenuity. 【Being implemented on top of apply/3, they were of course a bit slower than apply/3 itself.】
- But that is ancient history. Funs were given their own data type in the R6B release and were further optimized in the R7B release. Now the cost of a fun call falls roughly between the cost of a call to a local function and apply/3. 【Note that this is R6B, not R16B.】
- Addendum: the exact costs depend on the OTP release and hardware, so the most reliable numbers come from measuring on your own system, as sketched below.
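A minimal micro-benchmark sketch of the three call styles mentioned above, using timer:tc/1; the module and function names are illustrative, not from the original guide:

-module(call_cost).
-export([run/0, id/1]).

%% The function being called in three different ways.
id(X) -> X.

run() ->
    N = 1000000,
    F = fun id/1,
    {LocalUs, _} = timer:tc(fun() -> loop_local(N) end),
    {FunUs, _}   = timer:tc(fun() -> loop_fun(F, N) end),
    {ApplyUs, _} = timer:tc(fun() -> loop_apply(N) end),
    io:format("local: ~p us, fun: ~p us, apply/3: ~p us~n",
              [LocalUs, FunUs, ApplyUs]).

%% Local call.
loop_local(0) -> ok;
loop_local(N) -> id(N), loop_local(N - 1).

%% Fun call.
loop_fun(_F, 0) -> ok;
loop_fun(F, N) -> F(N), loop_fun(F, N - 1).

%% apply/3 call (id/1 must be exported for this).
loop_apply(0) -> ok;
loop_apply(N) -> apply(?MODULE, id, [N]), loop_apply(N - 1).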
2.2 Myth: List comprehensions are slow 【False.】
- List comprehensions used to be implemented using funs, and in the bad old days funs were really slow.
- Nowadays the compiler rewrites list comprehensions into an ordinary recursive function. Of course, using a tail-recursive function with a reverse at the end would still be faster. Or would it? That leads us to the next myth. 【So rather than lists:foldr/3, it is usually better to use lists:foldl/3 followed by lists:reverse/1; see the sketch below for what the compiler does with a comprehension.】
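As a rough illustration (module and function names are mine, and this is only the idea, not the compiler's literal output), the comprehension below behaves like the hand-written recursive function next to it, with no funs involved:

-module(lc_rewrite).
-export([squares_lc/1, squares_rec/1]).

%% The list comprehension ...
squares_lc(List) ->
    [X * X || X <- List].

%% ... is conceptually rewritten by the compiler into an ordinary
%% body-recursive function like this one.
squares_rec([X | Rest]) ->
    [X * X | squares_rec(Rest)];
squares_rec([]) ->
    [].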
2.3 Myth: Tail-recursive functions are MUCH faster than recursive functions 【50% false, 50% true.】
- According to the myth, recursive functions leave references to dead terms on the stack and the garbage collector will have to copy all those dead terms, while tail-recursive functions immediately discard those terms.
- That used to be true before R7B. In R7B, the compiler started to generate code that overwrites references to terms that will never be used with an empty list, so that the garbage collector would not keep dead values any longer than necessary.
- Even after that optimization, a tail-recursive function would still most of the time be faster than a body-recursive function. Why?
- It has to do with how many words of stack are used in each recursive call. In most cases, a recursive function would use more words on the stack for each recursion than the number of words a tail-recursive function would allocate on the heap. Since more memory is used, the garbage collector will be invoked more frequently, and it will have more work traversing the stack.
- In R12B and later releases, there is an optimization that in many cases reduces the number of words used on the stack in body-recursive calls, so that a body-recursive list function and a tail-recursive function that calls lists:reverse/1 at the end will use exactly the same amount of memory. lists:map/2, lists:filter/2, list comprehensions, and many other recursive functions now use the same amount of space as their tail-recursive equivalents. A sketch of the two styles follows.
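A minimal sketch (module name is illustrative) of the two styles being compared; on R12B and later they use roughly the same amount of memory:

-module(map_styles).
-export([body_map/2, tail_map/2]).

%% Body-recursive: builds the result while the calls return.
body_map(F, [H | T]) ->
    [F(H) | body_map(F, T)];
body_map(_F, []) ->
    [].

%% Tail-recursive: accumulates in reverse and reverses at the end.
tail_map(F, List) ->
    tail_map(F, List, []).

tail_map(F, [H | T], Acc) ->
    tail_map(F, T, [F(H) | Acc]);
tail_map(_F, [], Acc) ->
    lists:reverse(Acc).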
So which is faster?
- It depends. On Solaris/Sparc, the body-recursive function seems to be slightly faster, even for lists with very many elements. On the x86 architecture, tail-recursion was up to about 30 percent faster.
- So the choice is now mostly a matter of taste. If you really do need the utmost speed, you must measure. You can no longer be absolutely sure that the tail-recursive list function will be the fastest in all circumstances.
- Note: A tail-recursive function that does not need to reverse the list at the end is, of course, faster than a body-recursive function, as are tail-recursive functions that do not construct any terms at all (for instance, a function that sums all integers in a list). 【The extra cost discussed above comes from the final lists:reverse/1 call.】
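For example, a sum over a list builds no terms and needs no final reverse, so the tail-recursive version is the clear choice; a minimal sketch (module name is illustrative):

-module(sum_list).
-export([sum/1]).

%% Tail-recursive sum: nothing is built on the heap, and no reverse is needed.
sum(List) ->
    sum(List, 0).

sum([H | T], Acc) ->
    sum(T, Acc + H);
sum([], Acc) ->
    Acc.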
2.5 "++"总是非常坏的一种选择【错50%对50%】
- The ++ operator has, somewhat undeservedly, got a very bad reputation. It probably has something to do with code like
naive_reverse([H|T]) ->
    naive_reverse(T) ++ [H];
naive_reverse([]) ->
    [].
- which is the most inefficient way there is to reverse a list. Since the ++ operator copies its left operand, the result will be copied again and again and again... leading to quadratic complexity.
On the other hand, using ++ like this
OK
naive_but_ok_reverse([H|T], Acc) ->
    naive_but_ok_reverse(T, [H] ++ Acc);
naive_but_ok_reverse([], Acc) ->
    Acc.
- is not bad. Each list element will only be copied once. The growing result Acc is the right operand for the ++ operator, and it will not be copied.
Of course, experienced Erlang programmers would actually write
DO
vanilla_reverse([H|T], Acc) ->
    vanilla_reverse(T, [H|Acc]);
vanilla_reverse([], Acc) ->
    Acc.
- which is slightly more efficient because you don't build a list element only to directly copy it. (Or it would be more efficient if the compiler did not automatically rewrite [H]++Acc to [H|Acc].)
2.5 Myth: Strings are slow 【70% true, 30% false; if you choose the wrong representation it gets very painful.】
- Actually, string handling could be slow if done improperly. In Erlang, you'll have to think a little more about how the strings are used, choose an appropriate representation, and use the re module instead of the obsolete regexp module if you are going to use regular expressions.
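One common pattern, shown here as a sketch with made-up data and an illustrative module name, is to keep string data as binaries and compile a regular expression once with the re module:

-module(string_example).
-export([ids/1]).

%% Extract all numeric "id=" values from a binary subject, e.g.
%% ids(<<"id=42;id=7">>) returns [<<"42">>, <<"7">>].
ids(Subject) when is_binary(Subject) ->
    {ok, MP} = re:compile("id=(\\d+)"),      % compile once; reuse if called often
    case re:run(Subject, MP, [global, {capture, all_but_first, binary}]) of
        {match, Groups} -> [Id || [Id] <- Groups];
        nomatch         -> []
    end.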
2.6 Myth: Repairing a Dets file is very slow 【90% true.】
- The repair time is still proportional to the number of records in the file, but Dets repairs used to be much, much slower in the past. Dets has been massively rewritten and improved.
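For reference, whether a repair happens is decided when the file is opened; a minimal sketch with an illustrative table and file name (the repair option is a standard dets option, the names are not from the original post):

%% {repair, force} rebuilds the file even if it was closed properly; the
%% default behaviour repairs only files that were not closed properly.
{ok, my_table} = dets:open_file(my_table,
                                [{file, "my_table.dets"}, {repair, force}]),
ok = dets:close(my_table).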
2.7 Myth: BEAM is a stack-based byte-code virtual machine (and therefore slow) 【100% false.】
- BEAM is a register-based virtual machine. It has 1024 virtual registers that are used for holding temporary values and for passing arguments when calling functions. Variables that need to survive a function call are saved to the stack.
- BEAM is a threaded-code interpreter. Each instruction is a word pointing directly to executable C code, making instruction dispatching very fast.
2.8 Myth: Use '_' to speed up your program when a variable is not used 【False.】
- That was once true, but since R6B the BEAM compiler is quite capable of seeing for itself that a variable is not used. 【Why are all the releases mentioned so old? It makes me wonder whether this document itself is a bit dated.】
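A small sketch (module name is illustrative) of what the underscore prefix actually buys you, which is only silencing the "unused variable" warning, not faster code:

-module(unused_vars).
-export([first/1]).

%% _Rest is never used. Prefixing it with '_' suppresses the compiler's
%% "unused variable" warning; the generated code is no faster than if the
%% variable were named Rest (or matched with a plain _).
first([Head | _Rest]) ->
    Head.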