Scaling an early stage startup
by Mark Maunder <mark@feedjit.com>
Why do performance and early scaling matter?
• Slow performance could cost you 20% of your revenue according to Google.
• Any reduction in hosting costs goes directly to your bottom line as profit or can accelerate growth.
• In a viral business, slow performance can damage your viral growth.
My first missteps
• Misconfiguration. Web server and DB configured to grab too much RAM.
• As traffic builds, the server swaps and slows down drastically.
• Easy to fix – just a quick config change on web server and/or DB.
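The fix is essentially a memory cap: keep (max web server processes) × (per-process RSS) plus the DB's buffers comfortably under physical RAM, so the box never swaps. A minimal Apache sketch under assumed numbers (a 2GB box and ~30MB mod_perl processes; none of these figures are from the talk):

```apache
# Cap worker count so the web server can never swap the box:
# MaxClients x per-process RSS must fit in RAM alongside the DB.
<IfModule mpm_prefork_module>
    StartServers         5
    MaxClients          40     # 40 x ~30MB = ~1.2GB, leaving room for MySQL
    MaxRequestsPerChild 1000   # recycle children to bound memory growth
</IfModule>
```

The same cap applies on the DB side, e.g. shrinking MySQL's `key_buffer_size` so both daemons fit in RAM together.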
Traffic at this stage
• 2 Widgets per second
• 10 HTTP requests per second
• 1 Widget = 1 Pageview
• We serve as many pages as our users do, combined.
Keepalive – Good for clients, bad for servers.
• As http requests increased to 10 per second, I ran out of server threads to handle connections.
• Keepalive was on and KeepAliveTimeout was set to 300 seconds.
• Turned Keepalive off.
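In Apache terms the change is a single directive; a sketch (the talk doesn't show the actual config):

```apache
# With KeepAlive On and KeepAliveTimeout 300, every idle client pins a
# server thread for up to five minutes. Turning it off frees the thread
# as soon as the response is written.
KeepAlive Off
```

If keepalive has to stay on, dropping the timeout to a few seconds is far safer than 300.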
Traffic at this stage
• 4 Widgets per second
• 20 HTTP requests per second
Cache as much DB data as possible
• I used Perl’s Cache::FileCache to cache either DB data or rendered HTML on disk.
• MemCacheD, developed for LiveJournal, caches across servers.
• YMMV – How dynamic is your data?
MySQL not fast enough
• High number of writes & deletes on a large single table caused severe slowness.
• Writes blow away the query cache.
• MySQL doesn’t handle a large number of small tables well (over 10,000).
• MySQL is memory hungry if you want to cache large indexes.
• I maxed out at about 200 concurrent read/write queries per second with over 1 million records (and that’s not large enough).
Perl’s Tie::File to the early rescue
• Tie::File is a very simple flat-file API.
• Lots of files/tables.
• Faster – 500 to 1000 concurrent read/writes per second.
• Prepending requires reading and rewriting the whole file.
BerkeleyDB is very very fast!
• I’m also experimenting with BerkeleyDB for some small intensive tasks.
• Data from Oracle, which owns BDB: just over 90,000 transactional writes per second.
• Over 1 Million non-transactional writes per second in memory.
• Oracle’s machine: Linux on an AMD Athlon™ 64 processor 3200+ at 1GHz with 1GB of RAM and a 7200RPM drive with 8MB of cache RAM.
Source: http://www.oracle.com/technology/products/berkeley-db/pdf/berkeley-db-perf.pdf
Traffic at this stage
• 7 Widgets per second
• 35 HTTP requests per second
Created a separate image and CSS server
• Enabled Keepalive on the Image server to be nice to clients.
• Static content requires very little memory per thread/process.
• Kept Keepalive off on the App server to reduce memory.
• Added benefit of higher browser concurrency with 2 hostnames.
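The talk ran static content on a physically separate server; as a sketch of the policy, a two-vhost config captures the same split (hostnames are illustrative):

```apache
# Static threads are cheap, so keepalive can stay on for them.
<VirtualHost *:80>
    ServerName static.example.com    # images + CSS
    KeepAlive On
    KeepAliveTimeout 5
</VirtualHost>

# mod_perl processes are heavy, so free them immediately.
<VirtualHost *:80>
    ServerName app.example.com       # dynamic widget requests
    KeepAlive Off
</VirtualHost>
```

The two hostnames also buy the browser-concurrency win mentioned above, since browsers cap connections per hostname.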
Source: http://www.die.net/musings/page_load_time/
Now using Home Grown Fixed Length Records
• A lot like ISAM or MyISAM
• Fixed length records mean we seek directly to the data. No more file slurping.
• Sequential records mean sequential reads which are fast.
• Still using file level locking.
• Benchmarked at 20,000+ concurrent reads/writes/deletes.
Traffic at this stage
• 12 Widgets per second
• 50 to 60 HTTP requests per second
• Load average spiking to 12 or more about 3 times per day for an unknown reason.
Blocking Content Thieves
• Content thieves were aggressively crawling our site on pages that are CPU intensive.
• Robots.txt is irrelevant; they ignore it.
• Reverse DNS lookup with ‘dig -x’
• Firewall the &^%$@’s with ‘iptables’
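For example (the IP is an RFC 5737 documentation address; substitute the real offender, and note that iptables needs root):

```shell
# Who is this crawler? Reverse DNS lookup:
dig -x 203.0.113.50 +short

# Drop its traffic at the firewall:
iptables -A INPUT -s 203.0.113.50 -j DROP
```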
Moved to httpd.prefork
• With mod_perl, httpd.worker consumed more memory than prefork because the worker threads don’t share interpreter memory.
• Tuning the number of Perl interpreters vs number of threads didn’t improve things.
• Prefork with no keepalive on the app server uses less RAM and works well – for Mod_Perl.
The amazing Linux Filesystem Cache
• Linux uses spare memory to cache files on disk.
• Lots of spare memory == Much faster I/O.
• Prefork freed lots of memory. 1.3 Gigs out of 2 Gigs is used as cache.
• I’ve noticed a roughly 20% performance increase since making the switch.
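You can watch this happening: on Linux, `free` reports how much otherwise-idle RAM is holding cached file data, and that memory is handed back instantly if a process needs it:

```shell
free -m                                            # "cached" column = page cache
grep -E '^(MemFree|Buffers|Cached)' /proc/meminfo  # finer breakdown
```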
Tools
• httperf for benchmarking your server
• Websitepulse.com for performance monitoring.
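A hypothetical httperf run (host, URI, and rate are illustrative): open 1000 connections at 20 per second and measure reply rate and latency.

```shell
httperf --server app.example.com --port 80 --uri /widget \
        --num-conns 1000 --rate 20
```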
Summary
• Make content as static as possible.
• Cache as much of your dynamic content as possible.
• Separate serving app requests and serving static content.
• Don’t underestimate the speed of lightweight file-access APIs.
• Only serve real users and search engines you care about.