Seven database servers, all running CentOS 4, each configured with 2 dual-core Opteron 270s, 16GB of RAM, and 4x36GB 15K RPM SCSI drives. One is a write-only database and the rest are read/write databases; their roles can be swapped dynamically at any time.
Software: HTTP requests first pass through the pound servers. Pound is a proxy server that picks a web server to respond to each request. Slashdot runs six pounds in total: one serves HTTPS (a perk offered to subscribers) and the other five handle standard HTTP. The web servers run Apache and the database is MySQL. Slash 1.0 was finished in early 2000; the current version is 2.2.6.
___________________________________________________
http://meta.slashdot.org/article.pl?sid=07/10/22/145209
Today we have Part 2 in our exciting 2-part series about the infrastructure that powers Slashdot. Last week Uriah told us all about the hardware powering the system. This week, Jamie McCarthy picks up the story and tells us about the software... from pound to memcached to mysql and more. Hit that link and read on.
The software side of Slashdot takes over at the point where our load balancers -- described in Friday's hardware story -- hand off your incoming HTTP request to our pound servers.
Pound is a reverse proxy, which means it doesn't service the request itself, it just chooses which web server to hand it off to. We run 6 pounds, one for HTTPS traffic and the other 5 for regular HTTP. (Didn't know we support HTTPS, did ya? It's one of the perks for subscribers: you get to read Slashdot on the same webhead that admins use, which is always going to be responsive even during a crush of traffic -- because if it isn't, Rob's going to breathe down our necks!)
The pounds send traffic to one of the 16 apaches on our 16 webheads -- 15 regular, and the 1 HTTPS. Now, pound itself is so undemanding that we run it side-by-side with the apaches. The HTTPS pound handles SSL itself, handing off a plaintext HTTP request to its machine's apache, so the apache it redirects traffic to doesn't need mod_ssl compiled in. One less headache! Of our other 15 webheads, 5 also run a pound, not to distribute load but just for redundancy.
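In pound terms, SSL termination looks roughly like the sketch below (pound 2.x syntax; the addresses and certificate path are placeholders, not the production config): the listener holds the certificate, and the backend is the apache on the same machine speaking plain HTTP.

    # Sketch of the HTTPS pound: SSL terminates here, so the apache behind it
    # only ever sees plain HTTP. Paths and IPs are placeholders.
    ListenHTTPS
        Address 0.0.0.0
        Port    443
        Cert    "/etc/pound/slashdot.pem"
        Service
            BackEnd
                Address 127.0.0.1    # the apache running on this machine
                Port    80
            End
        End
    End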
(Trivia: pound normally adds an X-Forwarded-For header, which Slash::Apache substitutes for the (internal) IP of pound itself. But sometimes if you use a proxy on the internet to do something bad, it will send us an X-Forwarded-For header too, which we use to try to track abuse. So we patched pound to insert a special X-Forward-Pound header, so it doesn't overwrite what may come from an abuser's proxy.)
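To make the header dance concrete, here is a hypothetical mod_perl 1.x handler showing how the two headers can be used; the package name, notes key, and logic are illustrative, not the actual Slash::Apache code.

    # Illustrative mod_perl 1.x handler, not the actual Slash::Apache code.
    # The patched pound puts the real client address in X-Forward-Pound and
    # leaves any X-Forwarded-For sent by the client's own proxy untouched.
    package My::ForwardPound;      # invented package name
    use strict;
    use Apache::Constants qw(OK);

    sub handler {
        my $r = shift;
        my $client_ip = $r->header_in('X-Forward-Pound');   # trusted: set by our pound
        my $proxy_xff = $r->header_in('X-Forwarded-For');   # untrusted: from the outside world
        # Treat the pound-supplied address as the connection's remote IP...
        $r->connection->remote_ip($client_ip) if $client_ip;
        # ...and stash the outside X-Forwarded-For for abuse tracking.
        $r->notes('abuse_xff' => $proxy_xff) if defined $proxy_xff;
        return OK;
    }
    1;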
The other 15 webheads are segregated by type. This segregation is mostly what pound is for. We have 2 webheads for static (.shtml) requests, 4 for the dynamic homepage, 6 for dynamic comment-delivery pages (comments, article, pollBooth.pl), and 3 for all other dynamic scripts (ajax, tags, bookmarks, firehose). We segregate partly so that if there's a performance problem or a DDoS on a specific page, the rest of the site will remain functional. We're constantly changing the code and this sets up "performance firewalls" for when us silly coders decide to write infinite loops.
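Pound handles this kind of segregation with per-Service URL patterns; a simplified sketch (regexes and addresses are invented for illustration) might look like:

    # Route the comment-heavy scripts to the comment webheads and everything
    # else to the general dynamic pool. IPs and regexes are placeholders.
    ListenHTTP
        Address 0.0.0.0
        Port    80

        Service
            URL "^/(comments|article|pollBooth)\.pl"
            BackEnd
                Address 10.0.0.21   # a comment webhead
                Port    80
            End
        End

        Service                     # catch-all for the other dynamic scripts
            BackEnd
                Address 10.0.0.31
                Port    80
            End
        End
    End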
But we also segregate for efficiency reasons like httpd-level caching, and MaxClients tuning. Our webhead bottleneck is CPU, not RAM. We run MaxClients that might seem absurdly low (5-15 for dynamic webheads, 25 for static) but our philosophy is if we're not turning over requests quickly anyway, something's wrong, and stacking up more requests won't help the CPU chew through them any faster.
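On Apache 1.3's prefork model that tuning is just MaxClients in httpd.conf; the snippet below restates the ranges mentioned above (the spare-server values are typical companions, not quoted figures).

    # httpd.conf on a dynamic webhead
    MaxClients       10     # dynamic webheads run roughly 5-15
    MinSpareServers   2
    MaxSpareServers   5
    # a static (.shtml) webhead would run MaxClients closer to 25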
All the webheads run the same software, which they mount from a /usr/local exported by a read-only NFS machine. Everyone I've ever met outside of this company gives an involuntary shudder when NFS is mentioned, and yet we haven't had any problems since shortly after it was set up (2002-ish). I attribute this to a combination of our brilliant sysadmins and the fact that we only export read-only. The backend task that writes to /usr/local (to update index.shtml every minute, for example) runs on the NFS server itself.
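The read-only export itself is only a couple of lines of configuration; the hostname and network below are placeholders.

    # /etc/exports on the NFS server
    /usr/local  10.0.0.0/24(ro,sync,no_subtree_check)

    # /etc/fstab on each webhead: mount it read-only as well
    nfs-server:/usr/local  /usr/local  nfs  ro,hard,intr  0 0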
The apaches are versions 1.3, because there's never been a reason for us to switch to 2.0. We compile in mod_perl, and lingerd to free up RAM during delivery, but the only other nonstandard module we use is mod_auth_useragent to keep unfriendly bots away. Slash does make extensive use of each phase of the request loop (largely so we can send our 403's to out-of-control bots using a minimum of resources, and so your page is fully on its way while we write to the logging DB).
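The "page fully on its way while we write to the logging DB" behaviour maps naturally onto mod_perl 1.x's request phases. Here is a hypothetical cleanup-phase handler (package name, table, and DSN are invented, not the actual Slash code; it uses DBI placeholders for brevity, which, as noted below, Slash itself avoids):

    # Hypothetical PerlCleanupHandler: the cleanup phase runs after the
    # response has been sent, so logging never delays the client.
    package My::AccessLogger;      # invented package name
    use strict;
    use DBI;
    use Apache::Constants qw(OK);

    sub handler {
        my $r = shift;
        # Placeholder DSN; the real connection comes from Slash's own DB layer.
        my $dbh = DBI->connect_cached(
            'DBI:mysql:database=slashlog;host=db-log',
            'slash', 'password', { RaiseError => 0 },
        );
        $dbh->do(
            'INSERT INTO accesslog (uri, status, bytes) VALUES (?, ?, ?)',
            undef, $r->uri, $r->status, $r->bytes_sent,
        ) if $dbh;
        return OK;
    }
    1;

    # httpd.conf:
    #   PerlCleanupHandler My::AccessLogger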
Slash, of course, is the open-source perl code that runs Slashdot. If you're thinking of playing around with it, grab a recent copy from CVS: it's been years since we got around to a tarball release. The various scripts that handle web requests access the database through Slash's SQL API, implemented on top of DBD::mysql (now maintained, incidentally, by one of the original Slash 1.0 coders) and of course DBI.pm. The most interesting parts of this layer might be the following (illustrative sketches of a few of these appear after the list):
(a) We don't use Apache::DBI. We use connect_cached, but actually our main connection cache is the global objects that hold the connections. Some small chunks of data are so frequently used that we keep them around in those objects.
(b) We almost never use statement handles. We have eleven ways of doing a SELECT and the differences are mostly how we massage the results into the perl data structure they return.
(c) We don't use placeholders. Originally because DBD::mysql didn't take advantage of them, and now because we think any speed increase in a reasonably-optimized web app should be a trivial payoff for non-self-documenting argument order. Discuss!
(d) We built in replication support. A database object requested as a reader picks a random slave to read from for the duration of your HTTP request (or the backend task). We can weight them manually, and we have a task that reweights them automatically. (If we do something stupid and wedge a slave's replication thread, every Slash process, across 17 machines, starts throttling back its connections to that machine within 10 seconds. This was originally written to handle slave DBs getting bogged down by load, but with our new faster DBs, that just never happens, so if a slave falls behind, one of us probably typed something dumb at the mysql> prompt.)
(e) We bolted on memcached support. Why bolted-on? Because back when we first tried memcached, we got a huge performance boost by caching our three big data types (users, stories, comment text) and we're pretty sure additional caching would provide minimal benefit at this point. Memcached's main use is to get and set data objects, and Slash doesn't really bottleneck that way.
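A few of the points above are easier to see as code. These are minimal sketches under stated assumptions, not excerpts from Slash itself.

For (a), connect_cached is plain DBI; the DSN and the object layout here are placeholders.

    use DBI;

    # DBI returns the same cached handle when the identical DSN and
    # attributes are requested again in this process.
    my $dbh = DBI->connect_cached(
        'DBI:mysql:database=slash;host=db-reader1',    # placeholder DSN
        'slashuser', 'password',
        { RaiseError => 1, AutoCommit => 1 },
    );

    # The long-lived object then carries both the handle and the small,
    # frequently-read data kept around between queries.
    my $reader = { dbh => $dbh, var_cache => {} };     # illustrative layout only

For (c), the two styles side by side (table and values are made up); Slash takes the second, quoting values itself so the whole statement is readable at the call site.

    my ($bonus, $uid) = (1, 42);   # example values

    # With placeholders: compact, but the argument order documents nothing.
    $dbh->do('UPDATE users SET karma = karma + ? WHERE uid = ?', undef, $bonus, $uid);

    # Without placeholders: each value is quoted and interpolated, and the
    # statement reads as a whole.
    my $q_bonus = $dbh->quote($bonus);
    my $q_uid   = $dbh->quote($uid);
    $dbh->do("UPDATE users SET karma = karma + $q_bonus WHERE uid = $q_uid");

For (d), the heart of reader selection is a weighted random pick; the data layout is invented for illustration.

    # Each slave carries a weight; the reweighting task lowers the weight of
    # a lagging slave, so new requests drift away from it within seconds.
    my %slave_weight = (
        'db-reader1' => 10,
        'db-reader2' => 10,
        'db-reader3' => 1,      # falling behind: rarely picked
    );

    sub pick_reader {
        my $total = 0;
        $total += $_ for values %slave_weight;
        my $roll = rand($total);
        for my $host (sort keys %slave_weight) {
            $roll -= $slave_weight{$host};
            return $host if $roll < 0;
        }
        return (sort keys %slave_weight)[0];    # fallback; shouldn't be reached
    }

For (e), the memcached layer is the usual read-through get/set via Cache::Memcached; the key format, expiry, and DB helper are placeholders.

    use Cache::Memcached;

    my $memd = Cache::Memcached->new({
        servers => [ '10.0.0.41:11211', '10.0.0.42:11211' ],   # placeholder hosts
    });

    # Check the cache first; on a miss, hit a reader DB and cache the result.
    sub get_story {
        my ($sid) = @_;
        my $key   = "story:$sid";              # invented key format
        my $story = $memd->get($key);
        return $story if $story;
        $story = load_story_from_db($sid);     # assumed helper querying a reader DB
        $memd->set($key, $story, 300);         # cache for five minutes
        return $story;
    }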
Slash 1.0 was written way back in early 2000 with decent support for get and set methods to abstract objects out of a database (getDescriptions, subclassed _wheresql) -- but over the years we've only used them a few times. Most data types that are candidates to be objectified either are processed in large numbers (like tags and comments), in ways that would be difficult to do efficiently by subclassing, or have complicated table structures and pre- and post-processing (like users) that would make any generic objectification code pretty complicated. So most data access is done through get and set methods written custom for each data type, or, just as often, through methods that perform one specific update or select.
Overall, we're pretty happy with the database side of things. Most tables are fairly well normalized, not fully but mostly, and we've found this improves performance in most cases. Even on a fairly large site like Slashdot, with modern hardware and a little thinking ahead, we're able to push code and schema changes live quickly. Thanks to running multiple-master replication, we can keep the site fully live even during blocking queries like ALTER TABLE. After changes go live, we can find performance problem spots and optimize (which usually means caching, caching, caching, and occasionally multi-pass log processing for things like detecting abuse and picking users out of a hat who get mod points).
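The rolling schema change that multiple-master replication makes possible works roughly like this; the sketch below is the general MySQL technique, not Slashdot's exact procedure, and the column is only an example.

    -- On the master that has been pulled out of the live rotation:
    SET SESSION sql_log_bin = 0;   -- keep the blocking ALTER out of the binlog
    ALTER TABLE comments ADD COLUMN example_flag TINYINT NOT NULL DEFAULT 0;
    SET SESSION sql_log_bin = 1;

    -- Let this master catch up on replication, swap it back into rotation,
    -- then repeat the same statements on the other master.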
In fact, I'll go further than "pretty happy." Writing a database-backed web site has changed dramatically over the past seven years. The database used to be the bottleneck: centralized, hard to expand, slow. Now even a cheap DB server can run a pretty big site if you code defensively, and thanks to Moore's Law, memcached, and improvements in open-source database software, that part of the scaling issue isn't really a problem until you're practically the size of eBay. It's an exciting time to be coding web applications.