Source: http://tutorials.jenkov.com/java-networking/protocol-design.html
Comment:
Table of Contents
- Client - Server Roundtrips
- Demarcating the End of Requests and Responses
- Penetrating Firewalls
If you are designing a client-server system, you may also have to design a communication protocol between the client and the server. Of course, sometimes this protocol has already been decided for you, e.g. HTTP, XML-RPC (XML over HTTP), or SOAP (also XML over HTTP). But once in a while the protocol decision is open, so let's look at a few issues you may want to think about when designing your client-server protocol:
- Client - Server Roundtrips
- Demarcating the end of requests and responses
- Penetrating Firewalls
Client - Server Roundtrips
When a client and a server communicate to perform some operation, they exchange information. For instance, the client asks for a service to be performed, and the server attempts to perform it and sends back a response telling the client the result. Such an exchange of information between the client and the server is called a roundtrip.
When a computer (client or server) sends data to another computer over the internet, it takes some time from the moment the data is sent until it is received at the other end. This is the time it takes the data to travel over the internet, and it is called latency.
The more roundtrips your protocol has, the slower it becomes, especially if latency is high. The HTTP protocol consists of only a single request and a single response to perform its service: a single roundtrip, in other words. The SMTP protocol, on the other hand, consists of several roundtrips between the client and the server before an email is sent.
The only reason to break your protocol up into multiple roundtrips is if you have a large amount of data to send from the client to the server. In that case you have two options:
- Send the header information in a separate roundtrip.
- Break the message body up into smaller chunks.
Sending the header in a separate roundtrip (the first) can be smart if the server can do some initial pre-validation of, for instance, the header information. If that header information is invalid, sending the large body of data would have been a waste anyway.
If the network connection fails while you are transferring a large amount of data, you may have to resend all of it from scratch. By breaking the data up into smaller chunks, you only have to resend the chunks from the point where the connection failed onwards; the successfully transferred chunks do not have to be resent.
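The chunking idea can be sketched in a few lines of Java. This is a minimal illustration, not code from the article: the chunk size, the acknowledged-chunk index, and the class name are all made up for the example.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch: split a large body into fixed-size chunks so that a failed
// transfer can resume from the last acknowledged chunk instead of starting over.
public class ChunkedBody {

    static List<byte[]> split(byte[] body, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < body.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, body.length);
            chunks.add(Arrays.copyOfRange(body, offset, end));
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] body = new byte[10_000];
        List<byte[]> chunks = split(body, 4096);        // 3 chunks: 4096, 4096, 1808 bytes
        int lastAcknowledged = 1;                       // pretend chunks 0 and 1 arrived
        for (int i = lastAcknowledged + 1; i < chunks.size(); i++) {
            System.out.println("resending chunk " + i); // only chunk 2 needs resending
        }
    }
}
```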
Demarcating the End of Requests and Responses
If your protocol allows multiple requests to be sent over the same connection, the server needs some way to know when one request ends and a new one begins. The client likewise needs to know when one response ends and another begins.
You have two options for demarcating the end of a request:
- Send the length in bytes of the request at the beginning of the request.
- Send an end-of-request marker after the request data.
HTTP uses the first mechanism. One of the request headers, "Content-Length", tells how many bytes after the headers belong to the request.
The advantage of this model is that you don't have the overhead of an end-of-request marker, nor do you have to encode the body to keep the data from looking like the end-of-request marker.
The disadvantage is that the sender must know how many bytes will be transferred before the transfer starts. If the data is generated dynamically, you first have to buffer all of it before sending it, just to count the bytes.
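A length-prefixed request can be written and read with the standard Java data streams. The sketch below assumes a 4-byte big-endian length prefix; the class and method names are invented for the example and are not part of the article or of HTTP.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Minimal sketch of length-prefixed framing: a 4-byte length, then the payload.
public class LengthPrefixedFraming {

    // The sender must know the full payload length before writing anything.
    static void writeRequest(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // The receiver reads the length first, then exactly that many bytes.
    static byte[] readRequest(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload);
        return payload;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        writeRequest(new DataOutputStream(buffer), "hello".getBytes("UTF-8"));
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer.toByteArray()));
        System.out.println(new String(readRequest(in), "UTF-8")); // prints "hello"
    }
}
```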
By using an end-of-request marker you don't have to know how many bytes you are sending; you just send the marker at the end of the data. You do, however, have to make sure the data itself cannot be mistaken for the end-of-request marker. Here is one way to do that:
Let's say the end-of-request marker is the byte value 255. Of course the data can contain the value 255 too, so for each byte in the data with the value 255 you add an extra byte, also with the value 255. The end-of-request marker then becomes the value 255 followed by the value 0. Here are the encodings summarized:
255 in data --> 255, 255
end-of-request --> 255, 0
The sequence 255, 0 can never occur in the data, since every 255 is doubled to 255, 255. A 255, 255, 0 will therefore not be mistaken for a 255, 0: the first two 255's are interpreted together as an escaped data byte, and the trailing 0 stands on its own.
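The escaping scheme above can be expressed directly in Java. The following is a sketch of that exact 255-doubling rule; the class and method names are illustrative, and error handling for malformed input is omitted.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of the end-of-request marker scheme: 255 in the data becomes 255,255;
// the end of a request is marked by 255,0.
public class EndOfRequestMarker {

    static final int ESCAPE = 255;

    // Doubles every 255 in the data and appends the 255,0 end marker.
    static byte[] encode(byte[] data) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte b : data) {
            if ((b & 0xFF) == ESCAPE) {
                out.write(ESCAPE);          // escape the data byte
            }
            out.write(b);
        }
        out.write(ESCAPE);                  // end-of-request marker ...
        out.write(0);                       // ... 255 followed by 0
        return out.toByteArray();
    }

    // Reads until the 255,0 marker; 255,255 is collapsed back to a single 255.
    static byte[] decode(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            if (b == ESCAPE) {
                if (in.read() == 0) {
                    break;                  // end of request reached
                }
                out.write(ESCAPE);          // it was an escaped 255 in the data
            } else {
                out.write(b);
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {1, (byte) 255, 2};
        byte[] encoded = encode(data);      // bytes: 1, 255, 255, 2, 255, 0
        byte[] decoded = decode(new ByteArrayInputStream(encoded));
        System.out.println(decoded.length); // prints 3
    }
}
```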
Penetrating Firewalls
Most firewalls block all traffic other than the HTTP protocol. It can therefore be a good idea to layer your protocol on top of HTTP, as XML-RPC, SOAP, and REST do.
To layer your protocol on top of HTTP, you send your data back and forth between the client and the server inside HTTP requests and responses. Remember, an HTTP request or response can contain more than just text or HTML; you can send binary data in there too.
The only thing that can be a little awkward about layering your protocol on top of HTTP is that an HTTP request must contain a "Host" header field. If you are designing a P2P protocol on top of HTTP, your peers most likely won't be running multiple virtual hosts, so in that situation the required header field is unnecessary overhead (but a small one).
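As an illustration of layering a custom protocol on top of HTTP, the sketch below POSTs a binary message using the standard HttpURLConnection. The endpoint URL and the content type are placeholders invented for the example; the mandatory "Host" header is filled in automatically by the HTTP client.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: tunnel a custom binary message inside the body of an HTTP POST.
public class HttpTunnelClient {

    static int sendMessage(byte[] message) throws Exception {
        URL url = new URL("http://example.com/protocol-endpoint"); // placeholder endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setFixedLengthStreamingMode(message.length); // sets Content-Length for us
        // The mandatory "Host" header is added automatically by HttpURLConnection.
        try (OutputStream out = conn.getOutputStream()) {
            out.write(message);                           // binary payload in the HTTP body
        }
        return conn.getResponseCode();                    // the response body could carry
                                                          // binary data back just as well
    }
}
```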