http://itsystementwicklung.de/pipermail/list-pysqlite/2008-July/000113.html
I would like to do this for similar reasons. I have a web application where
the application data will be organized in a sqlite database unique to each
user. The user will upload the database when he begins a session and
download it when he is done. I want to manage the data entirely in memory
because
(a) the database size will be relatively small (on the order of a few
megabytes),
(b) there will be few users simultaneously using the system,
(c) I want the data to expire when the user's session expires (which may
occur as a timeout), so I intend to store the sqlite database in the user's
server-side session.
(d) I don't want to have to find a place to store the data on the disk,
ensure that the permissions are correct, and remove files that have expired.
So, to be specific, I don't want to load/dump the in-memory database to/from
a file, but instead to/from a stream.
It appears the API doesn't support what I'm trying to do. Do you have any
suggestions on how I might go about saving an in-memory database or
initializing from a previously-saved database without touching the file
system?
I understand I could use the aforementioned file-based technique to
accomplish what I need, but that's a clumsy way to get at the raw data, when
it presumably already exists in memory.
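Years after this thread, SQLite grew exactly this capability: sqlite3_serialize() and sqlite3_deserialize() (available since 3.23, compiled in by default in recent releases, and not yet existing when the question above was asked) copy a whole database, including a :memory: one, to and from a plain byte buffer that can then be kept in the user's server-side session. A minimal sketch, assuming a reasonably recent SQLite and omitting error handling:

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void) {
        sqlite3 *db = NULL;
        sqlite3_open(":memory:", &db);
        sqlite3_exec(db,
            "CREATE TABLE t(id INTEGER, name TEXT);"
            "INSERT INTO t VALUES(1, 'alice');",
            NULL, NULL, NULL);

        /* Dump the whole "main" database into a malloc'ed byte buffer. */
        sqlite3_int64 size = 0;
        unsigned char *buf = sqlite3_serialize(db, "main", &size, 0);
        sqlite3_close(db);
        /* buf/size is exactly what would be stored in the server-side session
           or streamed to the client. */

        /* Later (e.g. on the next request): rebuild an in-memory database from it. */
        sqlite3 *db2 = NULL;
        sqlite3_open(":memory:", &db2);
        /* FREEONCLOSE hands ownership of buf to SQLite; it is freed on close. */
        sqlite3_deserialize(db2, "main", buf, size, size,
                            SQLITE_DESERIALIZE_FREEONCLOSE |
                            SQLITE_DESERIALIZE_RESIZEABLE);

        printf("restored db is ready\n");
        sqlite3_close(db2);
        return 0;
    }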
Comments
#5
iunknown
2009-04-28
http://www.nabble.com/VFS-in-upcoming-3.5.0-td12441170.html
With a built in InMemory-VFS (and its new "Dump-Interface")
we could achieve many nice things in only one roundtrip.
At the serverside we could attach an empty InMemory-DB
(implicitly using the new MemVFS) to an already pooled
SQLite-Connection and perform a bunch of "Insert Into-Queries",
to create a small snapshot of e.g. a midsized "Master-Detail-
Scenario".
Now we could dump not only a "single-query-content" (as
with our Resultsets currently), but could write a complete,
related scenario (containing the prepared "InMemory-Tables")
to a ByteArray and transfer *this* to the client.
Now it would be great, if we could use the new MemVFS-
interface, to create *and* initialize a new InMemory-DB at
the clientside with the received ByteContent.
This way, we could perform related queries (Joins, Filters,
etc.) against the InMemory-DB (containing the midsize
Master-Detail-Set) without doing any extra-roundtrips
over the server.
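For the roundtrip described above, the Online Backup API (sqlite3_backup_*, added in SQLite 3.6.11 around the time of this comment) already lets a whole database be copied between any two connections, e.g. between a ":memory:" database and a disk file. It does not by itself produce a byte array, but it covers the snapshot/load pattern whenever a temporary file is acceptable. A minimal sketch, with error handling trimmed:

    #include <sqlite3.h>

    /* Copy the "main" database of pFrom into the "main" database of pTo.
       Works in either direction: memory -> file to save, file -> memory to load. */
    static int copy_db(sqlite3 *pTo, sqlite3 *pFrom) {
        sqlite3_backup *b = sqlite3_backup_init(pTo, "main", pFrom, "main");
        if (b == NULL) return sqlite3_errcode(pTo);
        sqlite3_backup_step(b, -1);   /* -1 copies all remaining pages in one pass */
        return sqlite3_backup_finish(b);
    }

For example, copy_db(diskDb, memDb) saves the in-memory snapshot, and copy_db(memDb, diskDb) rebuilds it on the other side.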
#4
iunknown
2009-04-27
http://stackoverflow.com/questions/128919/extreme-sharding-one-sqlite-database-per-user
Extreme Sharding: One SQLite Database Per User
I'm working on a web app that is somewhere between an email service and a social network. I feel it has the potential to grow really big in the future, so I'm concerned about scalability.
Instead of using one centralized MySQL/InnoDB database and then partitioning it when that time comes, I've decided to create a separate SQLite database for each active user: one active user per 'shard'.
That way backing up the database would be as easy as copying each user's small database file to a remote location once a day.
Scaling up will be as easy as adding extra hard disks to store the new files.
When the app grows beyond a single server I can link the servers together at the filesystem level using GlusterFS and run the app unchanged, or rig up a simple SQLite proxy system that will allow each server to manipulate sqlite files in adjacent servers.
Concurrency issues will be minimal because each HTTP request will only touch one or two database files at a time, out of thousands, and SQLite only blocks on writes anyway.
I'm betting that this approach will allow my app to scale gracefully and support lots of cool and unique features. Am I betting wrong? Am I missing anything?
UPDATE I decided to go with a less extreme solution, which is working fine so far. I'm using a fixed number of shards - 256 sqlite databases, to be precise. Each user is assigned and bound to a random shard by a simple hash function.
Most features of my app require access to just one or two shards per request, but there is one in particular that requires the execution of a simple query on 10 to 100 different shards out of 256, depending on the user. Tests indicate it would take about 0.02 seconds, or less, if all the data is cached in RAM. I think I can live with that!
UPDATE 2.0 I ported the app to MySQL/InnoDB and was able to get about the same performance for regular requests, but for that one request that requires shard walking, InnoDB is 4-5 times faster. For this reason, and other reasons, I'm dropping this architecture, but I hope someone somewhere finds a use for it...thanks.
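The fixed-shard variant in the update boils down to a deterministic user-to-shard mapping plus one SQLite file per shard. A minimal sketch of that mapping (the FNV-1a hash and the file naming are illustrative assumptions, not the poster's actual code):

    #include <sqlite3.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SHARDS 256

    /* Map a user id to one of the 256 shards (FNV-1a hash, illustrative choice). */
    static unsigned shard_for_user(const char *user_id) {
        uint32_t h = 2166136261u;
        for (const char *p = user_id; *p; p++) {
            h ^= (unsigned char)*p;
            h *= 16777619u;
        }
        return h % NUM_SHARDS;
    }

    /* Open the SQLite file holding this user's data: one file per shard,
       not one per user. */
    static int open_user_shard(const char *user_id, sqlite3 **out_db) {
        char path[64];
        snprintf(path, sizeof(path), "shards/shard-%03u.db", shard_for_user(user_id));
        return sqlite3_open(path, out_db);
    }

As long as the hash is stable, a user always lands on the same shard, and "shard walking" is just a loop over the 256 files.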
#3
iunknown
2009-04-25
eXtremeDB's mco_db_save() conveniently saves the in-memory database to disk, and mco_db_load() can reload it at any time. When a system has a large amount of initial configuration data to load, mco_db_save()/mco_db_load() is particularly handy.
    /* abstract write and read streams interfaces;
       stream handle is a pointer to implementation-specific data */
    typedef int (*mco_stream_write)(void* stream_handle, const void* from, unsigned nbytes);
    typedef int (*mco_stream_read)(void* stream_handle, /*OUT*/ void* to, unsigned max_nbytes);

    /* save db content to output stream */
    MCO_RET mco_db_save(void* stream_handle, mco_stream_write output_stream_writer, mco_db_h db);

    /* create db instance and load content from input stream - instead of mco_db_open() */
    MCO_RET mco_db_load(void* stream_handle, mco_stream_read input_stream_reader,
                        const char* db_name, mco_dictionary_h dict,
                        void* mem_ptr, mco_puint total_size);
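The two stream callbacks are plain function pointers, so hooking mco_db_save()/mco_db_load() up to an ordinary file takes only a few lines. The sketch below assumes the usual convention that each callback returns the number of bytes actually transferred:

    #include <stdio.h>

    /* Write callback: push bytes from the in-memory database to a FILE*. */
    static int file_stream_write(void *stream_handle, const void *from, unsigned nbytes) {
        return (int)fwrite(from, 1, nbytes, (FILE *)stream_handle);
    }

    /* Read callback: pull bytes from a FILE* while the database is rebuilt. */
    static int file_stream_read(void *stream_handle, void *to, unsigned max_nbytes) {
        return (int)fread(to, 1, max_nbytes, (FILE *)stream_handle);
    }

    /* Saving:  FILE *f = fopen("snapshot.img", "wb");
                mco_db_save(f, file_stream_write, db);
       Loading: FILE *f = fopen("snapshot.img", "rb");
                mco_db_load(f, file_stream_read, "mydb", dict, mem_ptr, total_size); */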
#2
iunknown
2009-04-25
http://www.kordelle.de/sqlite/
The following example shows how to use the patch. It also includes code to read the file back into the database. Don't forget to add error checking to your code!
    sqlite3 *dbHandle;
    char *errmsg;
    int errCode;

    /* open your memory database */
    sqlite3_open(":memory:", &dbHandle);

    if (access("./file.db", R_OK) != -1) {
        /* attach diskfile to current memory database */
        errCode = sqlite3_exec(dbHandle, "ATTACH DATABASE './file.db' AS oldDB", NULL, NULL, &errmsg);

        /* create "MYTABLE" in memory database as a full select from the saved table.
           content of the old table will be copied too. */
        errCode = sqlite3_exec(dbHandle, "CREATE TABLE MYTABLE AS SELECT * FROM oldDB.MYTABLE", NULL, NULL, &errmsg);

        /* close attached database */
        errCode = sqlite3_exec(dbHandle, "DETACH DATABASE oldDB", NULL, NULL, &errmsg);
    } else {
        /* first run: create a new empty database */
        errCode = sqlite3_exec(dbHandle, "CREATE TABLE MYTABLE(ID INTEGER, NAME VARCHAR(40))", NULL, NULL, &errmsg);
    }

    /* ... now work with your database ... */

    /* before closing the database, save it to a file (sqlite3_write_to_file() is provided by the patch) */
    errCode = sqlite3_write_to_file(dbHandle, "./file.db");

    /* close database */
    sqlite3_close(dbHandle);
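The save step above relies on sqlite3_write_to_file(), which exists only in the patch. With stock SQLite the same save step can be written as plain SQL before the sqlite3_close() call (table name reused from the example above, error handling omitted):

    /* Standard-SQL alternative to sqlite3_write_to_file(): attach the disk file
       and copy the in-memory table back into it before closing. */
    errCode = sqlite3_exec(dbHandle, "ATTACH DATABASE './file.db' AS diskDB", NULL, NULL, &errmsg);
    errCode = sqlite3_exec(dbHandle, "DROP TABLE IF EXISTS diskDB.MYTABLE", NULL, NULL, &errmsg);
    errCode = sqlite3_exec(dbHandle, "CREATE TABLE diskDB.MYTABLE AS SELECT * FROM MYTABLE", NULL, NULL, &errmsg);
    errCode = sqlite3_exec(dbHandle, "DETACH DATABASE diskDB", NULL, NULL, &errmsg);

Note that CREATE TABLE ... AS SELECT copies only the data, not indexes or constraints; those would have to be recreated on the disk copy if needed.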
#1
iunknown
2009-04-25
http://thread.gmane.org/gmane.comp.db.sqlite.general/38584/focus=38650
Actually, the reason I asked for this is that I wanted to use SQLite in a
flash memory based file system.
Considering that flash memory is a bit slower than a normal disk (and also limited in write operations), I wanted to change SQLite in some way
so that it can be more efficient and useful on flash memory systems.
The first solution I thought of was to make a kind of a buffer below the
b-tree, but it was really difficult understanding the internal system of
SQLite. But I'm still working on it.
Second, I wanted SQLite to do all of its work in memory. Yes, this would
be dangerous if there are unexpected power-offs or crashes, but I am
assuming that such situations won't occur. Then I visited the
website and found the patch. It is quite old, but I thought it would be
helpful to try it on my system.