Hadoop HDFS common operations utility class — HDFS_File wraps frequently used HDFS file operations (upload, download, read, rename, delete, metadata and block-location queries) with the Hadoop Java API.
package operateFile;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;
public class HDFS_File {
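// ad-hoc test driver: only PutFile is active below; the other calls are left commented out for manual testing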
public static void main(String args[]) throws IOException{
HDFS_File fu = new HDFS_File();
// HadoopFileUtil hu = new HadoopFileUtil();
String src = "/hadoop/files02";
String dst = "hdfs://s21:9000/te";
dst = "hdfs://192.168.100.221:9000/user/ad";
String localSrc = "/hadoop/file02";
dst = "hdfs://localhost:9000/user/filess021";
Configuration conf = new Configuration();
String defaultFs = conf.get("fs.default.name"); // renamed to "fs.defaultFS" in Hadoop 2.x and later
System.out.println(defaultFs);
String dst2 = "hdfs://localhost:9000/user/ad";
// fu.putFileFormLocal(conf, src, dst);
// hu.createFile(src, dst);
/* InputStream in = new BufferedInputStream(new FileInputStream(src));
fu.createFileByInputStream(conf,in,dst);*/
// fu.CreateFile(conf, dst);
fu.PutFile(conf, src, dst);
// fu.ReadFile(conf, dst);
// fu.GetFile(conf,dst, src);
// fu.ReNameFile(conf, dst, dst2);
// fu.DelFile(conf, dst2, false);
// fu.GetFileModTime(conf, src);
}
//read the file from HDFS
public void ReadFile(Configuration conf, String FileName){
try{
FileSystem hdfs = FileSystem.get(URI.create(FileName),conf);
FSDataInputStream dis = hdfs.open(new Path(FileName));
IOUtils.copyBytes(dis, System.out, 4096, false);
dis.close();
}catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
//copy the file from HDFS to local
public void GetFile(Configuration conf, String srcFile, String dstFile){
try {
FileSystem hdfs = FileSystem.get(URI.create(srcFile),conf);
Path srcPath = new Path(srcFile);
Path dstPath = new Path(dstFile);
hdfs.copyToLocalFile(false,srcPath, dstPath);
}catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
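// write the contents of an InputStream to a new HDFS file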
public void createFileByInputStream(Configuration conf, InputStream in, String dstFile) throws IOException{
FileSystem fs = FileSystem.get(URI.create(dstFile), conf);
// fs = FileSystem.get(conf);
/* FileStatus[] fst = fs.listStatus(new Path("/user"));
for(FileStatus f:fst){
System.out.println(f.getPath());
}*/
OutputStream out = fs.create(new Path(dstFile));
/* OutputStream out = fs.create(new Path(dstFile), new Progressable() {
public void progress() {
System.out.print(".");
}
});*/
IOUtils.copyBytes(in, out, 4096, true);
}
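// upload a local file to HDFS, printing a '.' for each progress callback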
public void putFileFormLocal(Configuration conf, String srcFile, String dstFile) throws IOException{
InputStream in = new BufferedInputStream(new FileInputStream(srcFile));
/* FileSystem hdfs = FileSystem.get(conf);
boolean b = hdfs.exists(new Path("/hadoop/hadoop"));
System.out.println(b);*/
FileSystem fs = FileSystem.get(URI.create(dstFile), conf);
// fs = FileSystem.get(conf);
FileStatus[] fst = fs.listStatus(new Path("/user"));
for(FileStatus f:fst){
System.out.println(f.getPath());
}
OutputStream out = fs.create(new Path(dstFile), new Progressable() {
public void progress() {
System.out.print(".");
}
});
IOUtils.copyBytes(in, out, 4096, true);
}
//copy the local file to HDFS
public void PutFile(Configuration conf, String srcFile, String dstFile) throws IOException {
/* try {
FileSystem hdfs = FileSystem.get(conf);
Path srcPath = new Path(srcFile);
Path dstPath = new Path(dstFile);
hdfs.copyFromLocalFile(srcPath, dstPath);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
*/
FileSystem fs = FileSystem.get(URI.create(dstFile),conf);
// Hadoop DFS deals with Path
Path outFile = new Path(dstFile);
// Check that the input is an existing regular file
File inputfile = new File(srcFile);
if (!inputfile.exists() || !inputfile.isFile()) {
throw new IOException("Input should be an existing file: " + srcFile);
}
// if (fs.exists(outFile))
// printAndExit("Output already exists");
// Read from and write to new file
// FSDataInputStream in = fs.open(inFile);
FSDataOutputStream out = fs.create(outFile);
InputStream image = new FileInputStream(inputfile);
byte buffer[] = new byte[256];
try {
int bytesRead = 0;
while ((bytesRead = image.read(buffer)) > 0) {
out.write(buffer, 0, bytesRead);
}
} catch (IOException e) {
System.out.println("Error while copying file");
} finally {
image.close();
out.close();
}
}
//create a new file on HDFS and return its output stream
public FSDataOutputStream CreateFile(Configuration conf, String FileName){
try {
FileSystem hdfs = FileSystem.get(URI.create(FileName), conf);
Path path = new Path(FileName);
FSDataOutputStream outputStream = hdfs.create(path);
return outputStream;
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return null;
}
//rename a file on HDFS
public boolean ReNameFile(Configuration conf, String srcName, String dstName){
try {
FileSystem hdfs = FileSystem.get(URI.create(srcName), conf);
Path fromPath = new Path(srcName);
Path toPath = new Path(dstName);
boolean isRenamed = hdfs.rename(fromPath, toPath);
return isRenamed;
}catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return false;
}
//delete a file or directory on HDFS
// type = true: delete recursively (directories)
// type = false: delete a single file
public boolean DelFile(Configuration conf, String FileName, boolean type){
try {
FileSystem hdfs = FileSystem.get(URI.create(FileName), conf);
Path path = new Path(FileName);
boolean isDeleted = hdfs.delete(path, type);
return isDeleted;
}catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return false;
}
//Get HDFS file last modification time
public long GetFileModTime(Configuration conf, String FileName){
try{
FileSystem hdfs = FileSystem.get(URI.create(FileName), conf);
Path path = new Path(FileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
long modificationTime = fileStatus.getModificationTime();
System.out.println(modificationTime);
return modificationTime;
}catch(IOException e){
e.printStackTrace();
}
return 0;
}
//check whether a file exists in HDFS
public boolean CheckFileExist(Configuration conf, String FileName){
try{
FileSystem hdfs = FileSystem.get(URI.create(FileName), conf);
Path path = new Path(FileName);
boolean isExists = hdfs.exists(path);
return isExists;
}catch(IOException e){
e.printStackTrace();
}
return false;
}
//Get the host names that hold each block of a file in the HDFS cluster
public List<String []> GetFileBolckHost(Configuration conf, String FileName){
try{
List<String []> list = new ArrayList<String []>();
FileSystem hdfs = FileSystem.get(URI.create(FileName), conf);
Path path = new Path(FileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
BlockLocation[] blkLocations = hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen());
int blkCount = blkLocations.length;
for (int i=0; i < blkCount; i++) {
String[] hosts = blkLocations[i].getHosts();
list.add(hosts);
}
return list;
}catch(IOException e){
e.printStackTrace();
}
return null;
}
//Get the host names of all datanodes in the HDFS cluster
public String[] GetAllNodeName(Configuration conf){
try{
FileSystem fs = FileSystem.get(conf);
DistributedFileSystem hdfs = (DistributedFileSystem) fs;
DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
String[] names = new String[dataNodeStats.length];
for (int i = 0; i < dataNodeStats.length; i++) {
names[i] = dataNodeStats[i].getHostName();
}
return names;
}catch(IOException e){
e.printStackTrace();
}
return null;
}
}
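A minimal usage sketch (not part of the original post): it assumes a NameNode reachable at hdfs://localhost:9000 and a local file /tmp/sample.txt; the class name, addresses and paths below are placeholders to be adjusted for a real cluster.

package operateFile;

import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

// Hypothetical driver; the NameNode address and file paths are assumptions, not values from the original post.
public class HDFS_File_Demo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000"); // assumed NameNode address

        HDFS_File util = new HDFS_File();
        String local = "/tmp/sample.txt";                               // assumed local file
        String remote = "hdfs://localhost:9000/user/demo/sample.txt";   // assumed HDFS target

        util.PutFile(conf, local, remote);                              // upload the local file
        System.out.println("exists: " + util.CheckFileExist(conf, remote));
        util.ReadFile(conf, remote);                                    // dump the file to stdout
        System.out.println("modified at: " + util.GetFileModTime(conf, remote));

        List<String[]> blocks = util.GetFileBolckHost(conf, remote);    // hosts per block
        if (blocks != null) {
            for (String[] hosts : blocks) {
                System.out.println("block hosts: " + Arrays.toString(hosts));
            }
        }
    }
}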