lrucache.py: a Least-Recently-Used (LRU) cache
2009-05-04 13:22:36
Found online:
# lrucache.py -- a simple LRU (Least-Recently-Used) cache class
# Copyright 2004 Evan Prodromou <evan@bad.dynu.ca>
# Licensed under the Academic Free License 2.1
# Licensed for ftputil under the revised BSD license
# with permission by the author, Evan Prodromou. Many
# thanks, Evan! :-)
#
# The original file is available at
# http://pypi.python.org/pypi/lrucache/0.2 .
# arch-tag: LRU cache main module
"""a simple LRU (Least-Recently-Used) cache module

This module provides very simple LRU (Least-Recently-Used) cache
functionality.

An *in-memory cache* is useful for storing the results of an
'expensive' process (one that takes a lot of time or resources) for
later re-use. Typical examples are accessing data from the filesystem,
a database, or a network location. If you know you'll need to re-read
the data again, it can help to keep it in a cache.

You *can* use a Python dictionary as a cache for some purposes.
However, if the results you're caching are large, or you have a lot of
possible results, this can be impractical memory-wise.

An *LRU cache*, on the other hand, only keeps _some_ of the results in
memory, which keeps you from overusing resources. The cache is bounded
by a maximum size; if you try to add more values to the cache, it will
automatically discard the values that you haven't read or written to
in the longest time. In other words, the least-recently-used items are
discarded. [1]_

.. [1]: 'Discarded' here means 'removed from the cache'.
"""

from __future__ import generators
import time
from heapq import heappush, heappop, heapify

# the suffix after the hyphen denotes modifications by the
# ftputil project with respect to the original version
__version__ = "0.2-1"
__all__ = ['CacheKeyError', 'LRUCache', 'DEFAULT_SIZE']
__docformat__ = 'reStructuredText en'

DEFAULT_SIZE = 16
"""Default size of a new LRUCache object, if no 'size' argument is given."""


class CacheKeyError(KeyError):
    """Error raised when cache requests fail.

    When a cache record is accessed which no longer exists (or never did),
    this error is raised. To avoid it, you may want to check for the
    existence of a cache record before reading or deleting it."""
    pass


class LRUCache(object):
    """Least-Recently-Used (LRU) cache.

    Instances of this class provide a least-recently-used (LRU) cache. They
    emulate a Python mapping type. You can use an LRU cache more or less like
    a Python dictionary, with the exception that objects you put into the
    cache may be discarded before you take them out.

    Some example usage::

        cache = LRUCache(32)  # new cache
        cache['foo'] = get_file_contents('foo')  # or whatever

        if 'foo' in cache:  # if it's still in cache...
            # use cached version
            contents = cache['foo']
        else:
            # recalculate
            contents = get_file_contents('foo')
            # store in cache for next time
            cache['foo'] = contents

        print cache.size  # maximum size
        print len(cache)  # 0 <= len(cache) <= cache.size

        cache.size = 10  # auto-shrink on size assignment

        for i in range(50):  # note: larger than cache size
            cache[i] = i

        if 0 not in cache: print 'Zero was discarded.'

        if 42 in cache:
            del cache[42]  # manual deletion

        for j in cache:  # iterate (in LRU order)
            print j, cache[j]  # iterator produces keys, not values
    """

    class __Node(object):
        """Record of a cached value. Not for public consumption."""

        def __init__(self, key, obj, timestamp, sort_key):
            object.__init__(self)
            self.key = key
            self.obj = obj
            self.atime = timestamp
            self.mtime = self.atime
            self._sort_key = sort_key

        def __cmp__(self, other):
            return cmp(self._sort_key, other._sort_key)

        def __repr__(self):
            return "<%s %s => %s (%s)>" % \
                   (self.__class__, self.key, self.obj,
                    time.asctime(time.localtime(self.atime)))

    def __init__(self, size=DEFAULT_SIZE):
        # check arguments
        if size <= 0:
            raise ValueError, size
        elif type(size) is not type(0):
            raise TypeError, size
        object.__init__(self)
        self.__heap = []
        self.__dict = {}
        """Maximum size of the cache.
        If more than 'size' elements are added to the cache,
        the least-recently-used ones will be discarded."""
        self.size = size
        self.__counter = 0

    def _sort_key(self):
        """Return a new integer value upon every call.

        Cache nodes need a monotonically increasing time indicator.
        time.time() and time.clock() don't guarantee this in a
        platform-independent way.
        """
        self.__counter += 1
        return self.__counter

    def __len__(self):
        return len(self.__heap)

    def __contains__(self, key):
        return self.__dict.has_key(key)

    def __setitem__(self, key, obj):
        if self.__dict.has_key(key):
            node = self.__dict[key]
            # update node object in-place
            node.obj = obj
            node.atime = time.time()
            node.mtime = node.atime
            node._sort_key = self._sort_key()
            heapify(self.__heap)
        else:
            # size may have been reset, so we loop
            while len(self.__heap) >= self.size:
                lru = heappop(self.__heap)
                del self.__dict[lru.key]
            node = self.__Node(key, obj, time.time(), self._sort_key())
            self.__dict[key] = node
            heappush(self.__heap, node)

    def __getitem__(self, key):
        if not self.__dict.has_key(key):
            raise CacheKeyError(key)
        else:
            node = self.__dict[key]
            # update node object in-place
            node.atime = time.time()
            node._sort_key = self._sort_key()
            heapify(self.__heap)
            return node.obj

    def __delitem__(self, key):
        if not self.__dict.has_key(key):
            raise CacheKeyError(key)
        else:
            node = self.__dict[key]
            del self.__dict[key]
            self.__heap.remove(node)
            heapify(self.__heap)
            return node.obj

    def __iter__(self):
        copy = self.__heap[:]
        while len(copy) > 0:
            node = heappop(copy)
            yield node.key
        raise StopIteration

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        # automagically shrink heap on resize
        if name == 'size':
            while len(self.__heap) > value:
                lru = heappop(self.__heap)
                del self.__dict[lru.key]

    def __repr__(self):
        return "<%s (%d elements)>" % (str(self.__class__), len(self.__heap))

    def mtime(self, key):
        """Return the last modification time for the cache record with key.

        May be useful for cache instances where the stored values can get
        'stale', such as caching file or network resource contents."""
        if not self.__dict.has_key(key):
            raise CacheKeyError(key)
        else:
            node = self.__dict[key]
            return node.mtime


if __name__ == "__main__":
    cache = LRUCache(25)
    print cache
    for i in range(50):
        cache[i] = str(i)
    print cache
    if 46 in cache:
        print "46 in cache"
    del cache[46]
    print cache
    cache.size = 10
    print cache
    cache[46] = '46'
    print cache
    print len(cache)
    for c in cache:
        print c
    print cache
    print cache.mtime(46)
    for c in cache:
        print c
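The module above is Python 2 code (`print` statements, `cmp`, `has_key`, the old `raise` syntax). For readers on Python 3, the same eviction behaviour can be sketched with `collections.OrderedDict`, which tracks insertion order and lets you move entries to the "most recent" end in O(1). This is a minimal illustration, not the original module's API; the class name `SimpleLRU` is made up for this example:

```python
from collections import OrderedDict

class SimpleLRU:
    """Bounded mapping that discards the least-recently-used entry."""

    def __init__(self, size=16):
        if not isinstance(size, int):
            raise TypeError(size)
        if size <= 0:
            raise ValueError(size)
        self.size = size
        self._data = OrderedDict()  # front = least recent, back = most recent

    def __len__(self):
        return len(self._data)

    def __contains__(self, key):
        return key in self._data

    def __getitem__(self, key):
        # Reading refreshes the entry: move it to the most-recent end.
        value = self._data[key]          # raises KeyError if missing
        self._data.move_to_end(key)
        return value

    def __setitem__(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        # Evict from the least-recent end until we are within bounds.
        while len(self._data) > self.size:
            self._data.popitem(last=False)

cache = SimpleLRU(3)
for i in range(5):          # insert more items than the cache can hold
    cache[i] = str(i)
print(len(cache), 0 in cache, 4 in cache)  # → 3 False True
```

Unlike the heap-based original, there is no `heapify()` pass on every access, so reads and writes stay O(1); the trade-off is that this sketch drops the `atime`/`mtime` bookkeeping the original keeps per node. (Since Python 3.2, `functools.lru_cache` also provides this eviction policy as a ready-made function decorator.)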