LinkedIn Architecture
At JavaOne 2008, LinkedIn employees presented two sessions about the LinkedIn architecture. The slides are available online:
- LinkedIn - A Professional Social Network Built with Java™ Technologies and Agile Practices
- LinkedIn Communication Architecture
These slides are hosted at SlideShare. If you register then you can download them as PDFs.
This post summarizes the key parts of the LinkedIn architecture. It’s based on the presentations above, and on additional comments made during the presentation at JavaOne.
Site Statistics
- 22 million members
- 4+ million unique visitors/month
- 40 million page views/day
- 2 million searches/day
- 250K invitations sent/day
- 1 million answers posted
- 2 million email messages/day
Software
- Solaris (running on Sun x86 platform and Sparc)
- Tomcat and Jetty as application servers
- Oracle and MySQL as DBs
- No ORM (such as Hibernate); they use straight JDBC
- ActiveMQ for JMS. (It’s partitioned by type of messages. Backed by MySQL.)
- Lucene as a foundation for search
- Spring as glue
Server Architecture
2003-2005
- One monolithic web application
- One database: the Core Database
- The network graph is cached in memory in The Cloud
- Members Search implemented using Lucene. It runs on the same server as The Cloud, because member searches must be filtered according to the searching user’s network, so it’s convenient to have Lucene on the same machine as The Cloud.
- WebApp updates the Core Database directly. The Core Database updates The Cloud.
2006
- Added Replica DBs to reduce the load on the Core Database. They contain read-only data. A RepDB server manages updates of the Replica DBs.
- Moved Search out of The Cloud and into its own server.
- Changed the way updates are handled, by adding the Databus. This is a central component that distributes updates to any component that needs them. This is the new updates flow:
- Changes originate in the WebApp
- The WebApp updates the Core Database
- The Core Database sends updates to the Databus
- The Databus sends the updates to: the Replica DBs, The Cloud, and Search
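The fan-out pattern behind the Databus can be sketched in a few lines. This is a toy illustration of the idea, not LinkedIn's actual API; all class and method names are invented:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal sketch of a Databus-style fan-out: after the Core Database commits
// a change, it publishes the update to the bus, which relays it to every
// registered downstream consumer (Replica DBs, The Cloud, Search).
public class Databus {
    private final List<Consumer<String>> consumers = new ArrayList<>();

    // Each downstream component registers once at startup.
    public void register(Consumer<String> consumer) {
        consumers.add(consumer);
    }

    // Called by the Core Database after a successful commit.
    public void publish(String update) {
        for (Consumer<String> c : consumers) {
            c.accept(update); // each component applies the update independently
        }
    }
}
```

The value of this indirection is that adding a new consumer of updates (as LinkedIn later did with Search and the Replica DBs) requires no change to the Core Database, only another `register()` call.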
2008
- The WebApp doesn’t do everything itself anymore: they split parts of its business logic into Services. The WebApp still presents the GUI to the user, but now it calls Services to manipulate the Profile, Groups, etc.
- Each Service has its own domain-specific database (i.e., vertical partitioning).
- This architecture allows other applications (besides the main WebApp) to access LinkedIn. They’ve added applications for Recruiters, Ads, etc.
The Cloud
- The Cloud is a server that caches the entire LinkedIn network graph in memory.
- Network size: 22M nodes, 120M edges.
- Requires 12 GB of RAM.
- There are 40 instances in production
- Rebuilding an instance of The Cloud from disk takes 8 hours.
- The Cloud is updated in real-time using the Databus.
- Persisted to disk on shutdown.
- The cache is implemented in C++, accessed via JNI. They chose C++ instead of Java for two reasons:
- To use as little RAM as possible.
- Garbage Collection pauses were killing them. [LinkedIn said they were using advanced GCs, but GCs have improved since 2003; is this still a problem today?]
- Having to keep everything in RAM is a limitation, but as LinkedIn have pointed out, partitioning graphs is hard.
- [Sun offers servers with up to 2 TB of RAM (Sun SPARC Enterprise M9000 Server), so LinkedIn could support up to 1.1 billion users before they run out of memory. (This calculation is based only on the number of nodes, not edges.) Price is another matter: Sun say only "contact us for price", which is ominous considering that the prices they do list go up to $30,000.]
The Cloud caches the entire LinkedIn Network, but each user needs to see the network from his own point of view. It’s computationally expensive to calculate that, so they do it just once when a user session begins, and keep it cached. That takes up to 2 MB of RAM per user. This cached network is not updated during the session. (It is updated if the user himself adds/removes a link, but not if any of the user’s contacts make changes. LinkedIn says users won’t notice this.)
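The per-session network view described above amounts to a breadth-first walk over the in-memory graph, grouped by degree of separation. Here is a minimal sketch of that computation; the adjacency-list representation, the 3-degree cap, and all names are assumptions for illustration, not LinkedIn's implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Build a member's point-of-view network once at session start: BFS over the
// cached graph, recording at which degree each member is first reached.
public class NetworkView {
    // graph: memberId -> directly connected member IDs
    public static Map<Integer, Set<Integer>> compute(Map<Integer, List<Integer>> graph,
                                                     int root, int maxDegree) {
        Map<Integer, Set<Integer>> byDegree = new HashMap<>();
        Set<Integer> visited = new HashSet<>();
        visited.add(root);
        List<Integer> frontier = List.of(root);
        for (int degree = 1; degree <= maxDegree; degree++) {
            Set<Integer> next = new LinkedHashSet<>();
            for (int member : frontier) {
                for (int neighbor : graph.getOrDefault(member, List.of())) {
                    if (visited.add(neighbor)) {
                        next.add(neighbor); // first reached at this degree
                    }
                }
            }
            byDegree.put(degree, next);
            frontier = new ArrayList<>(next);
        }
        return byDegree; // cached for the rest of the session
    }
}
```

Doing this once per session and caching the result is exactly the trade-off the post describes: the walk is expensive, but subsequent search filtering against the cached view is cheap.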
As an aside, they use Ehcache to cache members’ profiles. They cache up to 2 million profiles (out of 22 million members). They tried caching using LFU algorithm (Least Frequently Used), but found that Ehcache would sometimes block for 30 seconds while recalculating LFU, so they switched to LRU (Least Recently Used).
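For reference, an LRU cache of the kind they switched to can be expressed in plain Java using `LinkedHashMap`'s access-order mode. This is a toy stand-in, not Ehcache's API, and the capacity is illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU profile cache: LinkedHashMap with accessOrder=true keeps
// entries ordered by most recent use, and removeEldestEntry evicts the
// least-recently-used entry once the capacity is exceeded. Unlike LFU,
// there is no ranking to recalculate, so eviction is O(1).
public class ProfileCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public ProfileCache(int capacity) {
        super(16, 0.75f, true); // accessOrder=true: iteration order tracks use
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // drop the least-recently-used entry
    }
}
```

The constant-time eviction is the point of the anecdote: LFU has to track and re-rank frequencies, which is where Ehcache's 30-second stalls came from.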
Communication Architecture
Communication Service
The Communication Service is responsible for permanent messages, e.g. InBox messages and emails.
- The entire system is asynchronous and uses JMS heavily
- Clients post messages via JMS
- Messages are then routed via a routing service to the appropriate mailbox or directly for email processing
- Message delivery: either Pull (clients request their messages), or Push (e.g., sending emails)
- They use Spring, with proprietary LinkedIn Spring extensions. Use HTTP-RPC.
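The Pull vs. Push distinction above can be sketched as follows. All names are invented for illustration; the real system does this asynchronously over JMS rather than with direct method calls:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Two delivery modes: InBox messages are stored until the member pulls them;
// email is pushed straight out through the mail pipeline.
public class DeliveryService {
    private final Map<String, List<String>> mailboxes = new HashMap<>();
    private final List<String> outboundEmail = new ArrayList<>();

    // Pull: park the message in the member's mailbox until requested.
    public void deliverToInbox(String member, String message) {
        mailboxes.computeIfAbsent(member, m -> new ArrayList<>()).add(message);
    }

    public List<String> pull(String member) {
        return mailboxes.getOrDefault(member, List.of());
    }

    // Push: hand the message straight to the mail pipeline (SMTP stand-in).
    public void pushEmail(String address, String message) {
        outboundEmail.add(address + ": " + message);
    }

    public int emailsSent() {
        return outboundEmail.size();
    }
}
```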
Scaling Techniques
- Functional partitioning: sent, received, archived, etc. [a.k.a. vertical partitioning]
- Class partitioning: Member mailboxes, guest mailboxes, corporate mailboxes
- Range partitioning: Member ID range; Email lexicographical range. [a.k.a. horizontal partitioning]
- Everything is asynchronous
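Range partitioning by member ID, the third technique above, can be sketched like this. The range width and shard count are made-up values for illustration:

```java
// Range partitioning: each shard owns a contiguous block of member IDs, so a
// mailbox lookup resolves to exactly one database without a directory lookup.
public class RangePartitioner {
    private final long rangeWidth; // how many member IDs per range
    private final int shardCount;

    public RangePartitioner(long rangeWidth, int shardCount) {
        this.rangeWidth = rangeWidth;
        this.shardCount = shardCount;
    }

    public int shardFor(long memberId) {
        return (int) ((memberId / rangeWidth) % shardCount);
    }
}
```

The same shape works for the lexicographical email ranges they mention, with the ID arithmetic replaced by string-prefix comparison against range boundaries.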
Network Updates Service
The Network Updates Service is responsible for short-lived notifications, e.g. status updates from your contacts.
Initial Architecture (up to 2007)
- There are many services that can contain updates.
- Clients make separate requests to each service that can have updates: Questions, Profile Updates, etc.
- It took a long time to gather all the data.
In 2008 they created the Network Updates Service. The implementation went through several iterations:
Iteration 1
- Client makes just one request, to the NetworkUpdateService.
- NetworkUpdateService makes multiple requests to gather the data from all the services. These requests are made in parallel.
- The results are aggregated and returned to the client together.
- Pull-based architecture.
- They rolled out this new system to everyone at LinkedIn, which caused problems while the system was stabilizing. In hindsight, they should have tried it out on a small subset of users first.
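The parallel fan-out at the heart of Iteration 1 might look like the following. The `Supplier`s stand in for remote service calls, and the pool size and all names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Supplier;

// Iteration 1 aggregator: one client request fans out to every
// update-producing service in parallel, then merges the results.
public class NetworkUpdateService {
    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    public List<String> gatherUpdates(List<Supplier<List<String>>> services)
            throws Exception {
        List<Future<List<String>>> pending = new ArrayList<>();
        for (Supplier<List<String>> service : services) {
            Callable<List<String>> call = service::get;
            pending.add(pool.submit(call)); // fire all requests at once
        }
        List<String> aggregated = new ArrayList<>();
        for (Future<List<String>> f : pending) {
            aggregated.addAll(f.get()); // wait for and merge each response
        }
        return aggregated;
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

The latency of the aggregated response is roughly that of the slowest service rather than the sum of all of them, which is what made one request to this service faster than the client's old per-service requests.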
Iteration 2
- Push-based architecture: whenever events occur in the system, add them to the user’s "mailbox". When a client asks for updates, return the data that’s already waiting in the mailbox.
- Pros: reads are much quicker since the data is already available.
- Cons: might waste effort on moving around update data that will never be read. Requires more storage space.
- There is still post-processing of updates before returning them to the user. E.g.: collapse 10 updates from a user to 1.
- The updates are stored in CLOBs: one CLOB per update type per user (for a total of 15 CLOBs per user).
- Incoming updates must be appended to the CLOB. They use optimistic locking to avoid lock contention.
- They had set the CLOB size to 8 KB, which was too large and led to a lot of wasted space.
- Design note: instead of CLOBs, LinkedIn could have created additional tables, one for each type of update. They said that they didn’t do this because of what they would have to do when updates expire: Had they created additional tables then they would have had to delete rows, and that’s very expensive.
- They used JMX to monitor and change the configuration in real-time. This was very helpful.
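The optimistic-locking append they describe follows a standard pattern: read the current value with its version, write back only if the version is unchanged, and retry on conflict. In the real system this would be an SQL `UPDATE ... SET clob = ?, version = version + 1 WHERE version = ?` against the CLOB row; in this sketch an `AtomicReference` stands in for the database row, and all names are invented:

```java
import java.util.concurrent.atomic.AtomicReference;

// Optimistic locking: no lock is held while preparing the new value; a
// write only succeeds if nobody else committed in the meantime.
public class OptimisticClob {
    record Row(long version, String clob) {}

    private final AtomicReference<Row> row = new AtomicReference<>(new Row(0, ""));

    public void append(String update) {
        while (true) {
            Row current = row.get();
            Row next = new Row(current.version() + 1, current.clob() + update + "\n");
            if (row.compareAndSet(current, next)) {
                return; // version still matched: our write committed
            }
            // another writer committed first: reread and retry
        }
    }

    public String contents() {
        return row.get().clob();
    }
}
```

The pattern works well here because conflicting writers for one user's mailbox are rare; under heavy contention the retry loop would waste work and pessimistic locking could win.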
Iteration 3
- Goal: improve speed by reducing the number of CLOB updates, because CLOB updates are expensive.
- Added an overflow buffer: a VARCHAR(4000) column where data is added initially. When this column is full, dump it to the CLOB. This eliminated 90% of CLOB updates.
- Reduced the size of the updates.
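The overflow-buffer idea can be sketched as follows. The 4000-character limit mirrors the VARCHAR(4000) column; the in-memory stand-ins and the write counter are for illustration only:

```java
// Iteration 3 overflow buffer: new updates accumulate in a small
// VARCHAR-sized column; the expensive CLOB write happens only when that
// buffer is full, so one CLOB update absorbs many incoming updates.
public class OverflowBuffer {
    private static final int BUFFER_LIMIT = 4000; // VARCHAR(4000)

    private final StringBuilder buffer = new StringBuilder(); // cheap writes
    private final StringBuilder clob = new StringBuilder();   // expensive writes
    private int clobWrites = 0;

    public void add(String update) {
        if (buffer.length() + update.length() > BUFFER_LIMIT) {
            clob.append(buffer); // one CLOB update flushes the whole buffer
            buffer.setLength(0);
            clobWrites++;
        }
        buffer.append(update);
    }

    public int clobWrites() {
        return clobWrites;
    }
}
```

With typical update sizes, dozens of updates fit in the buffer before a flush, which is consistent with the 90% reduction in CLOB updates they report.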
[LinkedIn have had success in moving from a Pull architecture to a Push architecture. However, don't discount Pull architectures. Amazon, for example, use a Pull architecture. In "A Conversation with Werner Vogels", Amazon's CTO said that when you visit the front page of Amazon they typically call more than 100 services in order to construct the page.]
The presentation ends with some tips about scaling. These are oldies but goodies:
- Can’t use just one database. Use many databases, partitioned horizontally and vertically.
- Because of partitioning, forget about referential integrity or cross-domain JOINs.
- Forget about 100% data integrity.
- At large scale, cost is a problem: hardware, databases, licenses, storage, power.
- Once you’re large, spammers and data-scrapers come a-knocking.
- Cache!
- Use asynchronous flows.
- Reporting and analytics are challenging; consider them up-front when designing the system.
- Expect the system to fail.
- Don’t underestimate your growth trajectory.