Slashdot Site Architecture: Hardware and Software [repost]

Read more:
http://slash.solidot.org/article.pl?sid=07/10/27/1244202&from=rss
Solidot often suffers minor glitches; recently, for example, its comment counter has lagged badly. How Slashdot, which runs the same slashcode, stays up is worth studying. Its Alexa rank is around 800 (digg is now around 100, and the gap keeps widening), and its daily traffic is enormous. For the site's 10th anniversary, Slashdot's engineers described the overall architecture in two parts: hardware and software.
Hardware: Slashdot now belongs to SourceForge, Inc., and its basic hardware setup is the same as SourceForge's other sites such as SourceForge.net, Thinkgeek.com, Freshmeat.net, and Linux.com. One data center with raised floors, generators, UPS, 24x7 security and so on, much like any other data center.
Bandwidth and network: a pair of Cisco 7301 routers, a pair of Foundry BigIron 8000 switches, and a pair of Rackable Systems 1U boxes acting as load-balancing firewalls: P4 Xeon 2.66GHz, 2GB RAM, 2x80GB IDE, running CentOS and LVS.
16 web servers, all running Red Hat 9. Two serve static content: scripts, images, and the homepage shown to non-registered users; four serve the homepage for registered users; the remaining ten handle comment pages. The servers are Rackable 1U machines: 2 Xeon 2.66GHz CPUs, 2GB of RAM, 2x80GB IDE disks...
7 database servers, all running CentOS 4, each with 2 dual-core Opteron 270s, 16GB RAM, and 4x36GB 15K RPM SCSI drives. One handles writes; the rest serve reads, and they can be swapped in and out dynamically at any time.

Software: HTTP requests first go through the pound servers. Pound is a reverse proxy: it picks a web server to answer each request. Slashdot runs six pounds in total, one for encrypted HTTPS access (offered to subscribers) and five for plain HTTP. The web servers run Apache and the database is MySQL. Slash 1.0 was finished in early 2000; the current version is 2.2.6.
___________________________________________________
http://meta.slashdot.org/article.pl?sid=07/10/22/145209
Today we have Part 2 in our exciting 2 part series about the infrastructure that powers Slashdot. Last week Uriah told us all about the hardware powering the system. This week, Jamie McCarthy picks up the story and tells us about the software... from pound to memcached to mysql and more. Hit that link and read on.

The software side of Slashdot takes over at the point where our load balancers -- described in Friday's hardware story -- hand off your incoming HTTP request to our pound servers.

Pound is a reverse proxy, which means it doesn't service the request itself, it just chooses which web server to hand it off to. We run 6 pounds, one for HTTPS traffic and the other 5 for regular HTTP. (Didn't know we support HTTPS, did ya? It's one of the perks for subscribers: you get to read Slashdot on the same webhead that admins use, which is always going to be responsive even during a crush of traffic -- because if it isn't, Rob's going to breathe down our necks!)

The pounds send traffic to one of the 16 apaches on our 16 webheads -- 15 regular, and the 1 HTTPS. Now, pound itself is so undemanding that we run it side-by-side with the apaches. The HTTPS pound handles SSL itself, handing off a plaintext HTTP request to its machine's apache, so the apache it redirects traffic to doesn't need mod_ssl compiled in. One less headache! Of our other 15 webheads, 5 also run a pound, not to distribute load but just for redundancy.
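A minimal pound.cfg sketch of that arrangement (addresses, ports, and the cert path are invented for illustration, not Slashdot's actual config):

```
# Plain-HTTP pound: accept on port 80, pick an apache to hand off to
ListenHTTP
    Address 0.0.0.0
    Port    80
End

# HTTPS pound: SSL is terminated here, so the apache behind it
# sees plaintext HTTP and needs no mod_ssl compiled in
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/cert.pem"
End

Service
    BackEnd
        Address 10.0.0.11    # a webhead's apache
        Port    80
    End
End
```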

(Trivia: pound normally adds an X-Forwarded-For header, which Slash::Apache substitutes for the (internal) IP of pound itself. But sometimes if you use a proxy on the internet to do something bad, it will send us an X-Forwarded-For header too, which we use to try to track abuse. So we patched pound to insert a special X-Forward-Pound header, so it doesn't overwrite what may come from an abuser's proxy.)

The other 15 webheads are segregated by type. This segregation is mostly what pound is for. We have 2 webheads for static (.shtml) requests, 4 for the dynamic homepage, 6 for dynamic comment-delivery pages (comments, article, pollBooth.pl), and 3 for all other dynamic scripts (ajax, tags, bookmarks, firehose). We segregate partly so that if there's a performance problem or a DDoS on a specific page, the rest of the site will remain functional. We're constantly changing the code and this sets up "performance firewalls" for when us silly coders decide to write infinite loops.
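In pound terms, that segregation is just URL-matched Services; a hedged sketch (patterns and addresses invented):

```
# .shtml requests go to the 2 static webheads
Service
    URL ".*\.shtml"
    BackEnd
        Address 10.0.0.21
        Port    80
    End
    BackEnd
        Address 10.0.0.22
        Port    80
    End
End

# comment-delivery scripts go to their own pool
Service
    URL ".*(comments|article|pollBooth)\.pl.*"
    BackEnd
        Address 10.0.0.31
        Port    80
    End
End
```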

But we also segregate for efficiency reasons like httpd-level caching, and MaxClients tuning. Our webhead bottleneck is CPU, not RAM. We run MaxClients that might seem absurdly low (5-15 for dynamic webheads, 25 for static) but our philosophy is if we're not turning over requests quickly anyway, something's wrong, and stacking up more requests won't help the CPU chew through them any faster.
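In Apache 1.3 terms that tuning is simply a deliberately low MaxClients in httpd.conf; a sketch using the ranges quoted above (not a copied config):

```
# dynamic webhead: CPU-bound, so run few workers and turn them over fast
MaxClients           10
MaxRequestsPerChild  1000
KeepAlive            Off
```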

All the webheads run the same software, which they mount from a /usr/local exported by a read-only NFS machine. Everyone I've ever met outside of this company gives an involuntary shudder when NFS is mentioned, and yet we haven't had any problems since shortly after it was set up (2002-ish). I attribute this to a combination of our brilliant sysadmins and the fact that we only export read-only. The backend task that writes to /usr/local (to update index.shtml every minute, for example) runs on the NFS server itself.
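A read-only export like that might look as follows (server and host names invented):

```
# /etc/exports on the NFS server: /usr/local exported read-only
/usr/local  webhead*(ro)

# /etc/fstab on each webhead: mounted read-only as well
nfsserver:/usr/local  /usr/local  nfs  ro,hard,intr  0 0
```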

The apaches are version 1.3, because there's never been a reason for us to switch to 2.0. We compile in mod_perl, and lingerd to free up RAM during delivery, but the only other nonstandard module we use is mod_auth_useragent to keep unfriendly bots away. Slash does make extensive use of each phase of the request loop (largely so we can send our 403's to out-of-control bots using a minimum of resources, and so your page is fully on its way while we write to the logging DB).
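Hooking each phase of the request loop looks roughly like this in a mod_perl 1.x httpd.conf (the handler package names are invented, not Slash's actual modules):

```
<Location />
    PerlAccessHandler  My::BlockBots   # send cheap 403s to out-of-control bots early
    PerlHandler        My::Content     # the main content-generation phase
    PerlLogHandler     My::LogToDB     # page is already on its way; now write the logging DB
</Location>
```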

Slash, of course, is the open-source perl code that runs Slashdot. If you're thinking of playing around with it, grab a recent copy from CVS: it's been years since we got around to a tarball release. The various scripts that handle web requests access the database through Slash's SQL API, implemented on top of DBD::mysql (now maintained, incidentally, by one of the original Slash 1.0 coders) and of course DBI.pm. The most interesting parts of this layer might be:

(a) We don't use Apache::DBI. We use connect_cached, but actually our main connection cache is the global objects that hold the connections. Some small chunks of data are so frequently used that we keep them around in those objects.
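The idea of keeping the connection (and small hot datasets) inside a long-lived global object can be sketched in Python; Slash does this in Perl on top of DBI, and every name below is invented for illustration:

```python
import sqlite3

class SlashDB:
    """Long-lived DB object: holds the connection and caches tiny hot datasets."""
    def __init__(self, dsn):
        self.dsn = dsn
        self._conn = None          # opened lazily, then reused for the process lifetime
        self._hot_cache = {}       # small, frequently-used chunks of data

    def conn(self):
        if self._conn is None:
            self._conn = sqlite3.connect(self.dsn)
        return self._conn

    def get_var(self, name):
        # e.g. site-wide config vars: read once, then serve from the object
        if name not in self._hot_cache:
            row = self.conn().execute(
                "SELECT value FROM vars WHERE name = ?", (name,)).fetchone()
            self._hot_cache[name] = row[0] if row else None
        return self._hot_cache[name]

# one global object per process plays the role of the connection cache
slashdb = SlashDB(":memory:")
slashdb.conn().execute("CREATE TABLE vars (name TEXT, value TEXT)")
slashdb.conn().execute("INSERT INTO vars VALUES ('sitename', 'Slashdot')")
print(slashdb.get_var('sitename'))  # hits SQL once, then the object cache
```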

(b) We almost never use statement handles. We have eleven ways of doing a SELECT and the differences are mostly how we massage the results into the perl data structure they return.
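"Eleven ways of doing a SELECT" means a family of wrappers differing mainly in the shape of the structure they return; two of them sketched in Python (names modeled loosely on Slash's sqlSelect* family, details invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (uid INTEGER, nickname TEXT);
    INSERT INTO users VALUES (1, 'CmdrTaco'), (2, 'Hemos');
""")

def select_all(cols, table, where="1=1"):
    # a list of rows, each row a tuple -- in the spirit of sqlSelectAll
    return conn.execute(f"SELECT {cols} FROM {table} WHERE {where}").fetchall()

def select_all_hashref(key, cols, table, where="1=1"):
    # a dict keyed by one column, each value a per-row dict -- like sqlSelectAllHashref
    cur = conn.execute(f"SELECT {cols} FROM {table} WHERE {where}")
    names = [d[0] for d in cur.description]
    return {row[names.index(key)]: dict(zip(names, row)) for row in cur.fetchall()}

print(select_all("uid, nickname", "users"))
print(select_all_hashref("uid", "uid, nickname", "users")[2]["nickname"])
```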

(c) We don't use placeholders. Originally because DBD::mysql didn't take advantage of them, and now because we think any speed increase in a reasonably-optimized web app should be a trivial payoff for non-self-documenting argument order. Discuss!
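The trade-off in (c), sketched in Python: with placeholders the values become an anonymous positional tuple away from the SQL; with interpolation they read inline at the call site, at the cost of doing the quoting yourself (as Slash's own quoting helpers do):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (uid INTEGER, seclev INTEGER)")
conn.execute("INSERT INTO users VALUES (42, 100)")

# Placeholder style: the driver handles quoting, but the argument
# order is non-self-documenting and lives far from the SQL text.
row = conn.execute("SELECT seclev FROM users WHERE uid = ?", (42,)).fetchone()

# Interpolated style (Slash's choice): self-documenting at the call site,
# but every value must be validated/quoted by hand to stay safe.
uid = int(42)  # hand-rolled validation standing in for a quoting helper
row2 = conn.execute(f"SELECT seclev FROM users WHERE uid = {uid}").fetchone()

print(row == row2)  # both return (100,)
```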

(d) We built in replication support. A database object requested as a reader picks a random slave to read from for the duration of your HTTP request (or the backend task). We can weight them manually, and we have a task that reweights them automatically. (If we do something stupid and wedge a slave's replication thread, every Slash process, across 17 machines, starts throttling back its connections to that machine within 10 seconds. This was originally written to handle slave DBs getting bogged down by load, but with our new faster DBs, that just never happens, so if a slave falls behind, one of us probably typed something dumb at the mysql> prompt.)
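A hedged sketch of the reader-selection idea: pick one weighted-random slave per request, and cut a slave's weight quickly when its replication lags (thresholds and names are invented, not Slash's actual code):

```python
import random

# weight 0 means "stop sending reads here"
slaves = {"db2": 3, "db3": 3, "db4": 1}
MAX_LAG_SECS = 10

def reweight(lag_secs):
    # a background task could run this every few seconds on every machine
    for host, lag in lag_secs.items():
        if lag > MAX_LAG_SECS:
            slaves[host] = 0       # throttle back: replication has fallen behind
        elif slaves[host] == 0:
            slaves[host] = 1       # caught up again, ease it back in

def pick_reader():
    # one slave is chosen per HTTP request and used for its duration
    hosts = [h for h, w in slaves.items() for _ in range(w)]
    return random.choice(hosts)

reweight({"db2": 0, "db3": 120, "db4": 1})   # db3's replication thread is wedged
print(pick_reader() in ("db2", "db4"))        # db3 is never selected
```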

(e) We bolted on memcached support. Why bolted-on? Because back when we first tried memcached, we got a huge performance boost by caching our three big data types (users, stories, comment text) and we're pretty sure additional caching would provide minimal benefit at this point. Memcached's main use is to get and set data objects, and Slash doesn't really bottleneck that way.
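The bolted-on caching is classic cache-aside over just those three object types; a Python sketch with a dict standing in for memcached (a real deployment would use a memcached client; everything here is illustrative):

```python
# a plain dict standing in for the memcached cluster
cache = {}

def db_load_user(uid):
    # stand-in for the real SELECT; pretend this is the expensive path
    return {"uid": uid, "nickname": f"user{uid}"}

def get_user(uid):
    key = f"user:{uid}"            # one keyspace per big data type: users, stories, comment text
    obj = cache.get(key)
    if obj is None:
        obj = db_load_user(uid)    # miss: fall through to the database
        cache[key] = obj           # then populate the cache for the next request
    return obj

get_user(7)                        # first call hits the DB and fills the cache
print("user:7" in cache)           # the object is now cached
print(get_user(7)["nickname"])     # second call is served from the cache
```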

Slash 1.0 was written way back in early 2000 with decent support for get and set methods to abstract objects out of a database (getDescriptions, subclassed _wheresql) -- but over the years we've only used them a few times. Most data types that are candidates to be objectified either are processed in large numbers (like tags and comments), in ways that would be difficult to do efficiently by subclassing, or have complicated table structures and pre- and post-processing (like users) that would make any generic objectification code pretty complicated. So most data access is done through get and set methods written custom for each data type, or, just as often, through methods that perform one specific update or select.

Overall, we're pretty happy with the database side of things. Most tables are fairly well normalized, not fully but mostly, and we've found this improves performance in most cases. Even on a fairly large site like Slashdot, with modern hardware and a little thinking ahead, we're able to push code and schema changes live quickly. Thanks to running multiple-master replication, we can keep the site fully live even during blocking queries like ALTER TABLE. After changes go live, we can find performance problem spots and optimize (which usually means caching, caching, caching, and occasionally multi-pass log processing for things like detecting abuse and picking users out of a hat who get mod points).

In fact, I'll go further than "pretty happy." Writing a database-backed web site has changed dramatically over the past seven years. The database used to be the bottleneck: centralized, hard to expand, slow. Now even a cheap DB server can run a pretty big site if you code defensively, and thanks to Moore's Law, memcached, and improvements in open-source database software, that part of the scaling issue isn't really a problem until you're practically the size of eBay. It's an exciting time to be coding web applications.