@Public
@Stable
Objects that are read from or written to a database should implement DBWritable. DBWritable is similar to Writable, except that the write(PreparedStatement) method takes a PreparedStatement and readFields(ResultSet) takes a ResultSet.
Implementations are responsible for writing the fields of the object to the PreparedStatement and for reading the fields of the object from the ResultSet.
Example:
If we have the following table in the database:
CREATE TABLE MyTable (
  counter INTEGER NOT NULL,
  timestamp BIGINT NOT NULL
);
then we can read/write the tuples from/to the table with:
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class MyWritable implements Writable, DBWritable {
  // Some data
  private int counter;
  private long timestamp;

  // Writable#write() implementation
  public void write(DataOutput out) throws IOException {
    out.writeInt(counter);
    out.writeLong(timestamp);
  }

  // Writable#readFields() implementation
  public void readFields(DataInput in) throws IOException {
    counter = in.readInt();
    timestamp = in.readLong();
  }

  // DBWritable#write() implementation: bind the fields to the statement parameters
  public void write(PreparedStatement statement) throws SQLException {
    statement.setInt(1, counter);
    statement.setLong(2, timestamp);
  }

  // DBWritable#readFields() implementation: read the fields from the result columns
  public void readFields(ResultSet resultSet) throws SQLException {
    counter = resultSet.getInt(1);
    timestamp = resultSet.getLong(2);
  }
}
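With MyWritable in place, a job reads the table through DBInputFormat, which calls MyWritable.readFields(ResultSet) for each row. Below is a minimal map-only driver sketch; the class name MyTableReadJob, the JDBC URL, credentials, and output path are illustrative assumptions (none of this comes from the original javadoc), and the MySQL connector jar is assumed to be on the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyTableReadJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder connection settings -- adjust driver class, URL, user, password.
    DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost/mydb", "user", "password");

    Job job = Job.getInstance(conf, "read MyTable");
    job.setJarByClass(MyTableReadJob.class);

    // DBInputFormat hands the mapper (LongWritable row id, MyWritable tuple) pairs.
    job.setInputFormatClass(DBInputFormat.class);
    DBInputFormat.setInput(job, MyWritable.class,
        "MyTable",               // table to read
        null,                    // optional WHERE conditions
        "counter",               // ORDER BY column, used to split the query
        "counter", "timestamp"); // columns to select

    job.setMapperClass(Mapper.class); // identity mapper, for illustration only
    job.setNumReduceTasks(0);         // map-only: dump rows straight to HDFS
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(MyWritable.class);
    FileOutputFormat.setOutputPath(job, new Path(args[0]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}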
---------------
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface DBWritable {

  /**
   * Sets the fields of the object in the {@link PreparedStatement}.
   * @param statement the statement that the fields are put into.
   * @throws SQLException
   */
  public void write(PreparedStatement statement) throws SQLException;

  /**
   * Reads the fields of the object from the {@link ResultSet}.
   * @param resultSet the {@link ResultSet} to get the fields from.
   * @throws SQLException
   */
  public void readFields(ResultSet resultSet) throws SQLException;
}
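On the write side it is DBOutputFormat that drives write(PreparedStatement): it generates an INSERT statement for the named columns and lets each output key bind the parameters. A configuration sketch, again with placeholder connection settings and a hypothetical class name MyTableWriteJob:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;

public class MyTableWriteJob {
  // Returns a job configured to insert MyWritable records into MyTable;
  // the input format, mapper, and reducer wiring are left to the caller.
  public static Job createJob(Configuration conf) throws Exception {
    // Placeholder connection settings, as in the read example.
    DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost/mydb", "user", "password");

    Job job = Job.getInstance(conf, "write MyTable");
    job.setJarByClass(MyTableWriteJob.class);

    // Builds "INSERT INTO MyTable (counter, timestamp) VALUES (?, ?)";
    // MyWritable.write(PreparedStatement) fills in the two parameters per record.
    job.setOutputFormatClass(DBOutputFormat.class);
    DBOutputFormat.setOutput(job, "MyTable", "counter", "timestamp");

    // DBOutputFormat writes the key and ignores the value.
    job.setOutputKeyClass(MyWritable.class);
    job.setOutputValueClass(NullWritable.class);
    return job;
  }
}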