Part 1: Install Hadoop
Most of the configuration work is appending a few lines to the end of each file; a few directories also have to be created by hand.
1 Set the hostname
[root@xinyanfei conf]# hostname
xinyanfei
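If the hostname still needs to be set, a minimal sketch for a CentOS 6-era system (to match the Hadoop 1.0.4 setup here — both commands are assumptions about the environment):
[root@xinyanfei ~]# hostname xinyanfei   # takes effect immediately, lost on reboot
[root@xinyanfei ~]# sed -i 's/^HOSTNAME=.*/HOSTNAME=xinyanfei/' /etc/sysconfig/network   # persists across reboots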
2 Configure /etc/hosts
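A sketch of the /etc/hosts entry for this single-node setup — 127.0.0.1 matches the ping output in the next step; on a real cluster the machine's LAN IP would be used instead:
[root@xinyanfei ~]# echo '127.0.0.1   xinyanfei' >> /etc/hosts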
3 Ping the hostname
[root@xinyanfei conf]# ping xinyanfei
PING xinyanfei (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.036 ms
4 Download Hadoop and unpack it
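The download and unpack commands themselves are not shown; a sketch, assuming the Apache archive URL for this release and the /export/servers install root used below:
[root@xinyanfei ~]# mkdir -p /export/servers && cd /export/servers
[root@xinyanfei servers]# wget https://archive.apache.org/dist/hadoop/common/hadoop-1.0.4/hadoop-1.0.4.tar.gz
[root@xinyanfei servers]# tar -xzf hadoop-1.0.4.tar.gz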
[root@xinyanfei ~]# cd /export/servers/hadoop-1.0.4/conf/
5 Review the modified core-site.xml
[root@xinyanfei conf]# cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://xinyanfei:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/export/Data/hadoop/tmp</value>
</property>
</configuration>
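As noted at the top, some paths have to be created by hand; hadoop.tmp.dir is one of them (a sketch — the NameNode format step below writes under this directory):
[root@xinyanfei conf]# mkdir -p /export/Data/hadoop/tmp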
6 Review the modified hadoop-env.sh
[root@xinyanfei conf]# cat hadoop-env.sh
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use. Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
# Extra Java CLASSPATH elements. Optional.
# export HADOOP_CLASSPATH=
# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000
# Extra Java runtime options. Empty by default.
# export HADOOP_OPTS=-server
# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS
# Extra ssh options. Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
# host:path where hadoop code should be rsync'd from. Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop
# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1
# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids
# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER
# The scheduling priority for daemon processes. See 'man nice'.
# export HADOOP_NICENESS=10
export JAVA_HOME=/export/servers/jdk1.7.0_80/
7 Review the modified hdfs-site.xml
[root@xinyanfei conf]# cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
8 Review the modified mapred-site.xml
[root@xinyanfei conf]# cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>xinyanfei:9001</value>
</property>
</configuration>
9 Review the masters and slaves files
[root@xinyanfei conf]# cat masters
localhost
[root@xinyanfei conf]# cat slaves
localhost
10 Format the NameNode: hadoop namenode -format
[root@localhost bin]# hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
16/11/17 13:44:59 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
16/11/17 13:44:59 INFO util.GSet: VM type = 64-bit
16/11/17 13:44:59 INFO util.GSet: 2% max memory = 17.78 MB
16/11/17 13:44:59 INFO util.GSet: capacity = 2^21 = 2097152 entries
16/11/17 13:44:59 INFO util.GSet: recommended=2097152, actual=2097152
16/11/17 13:45:00 INFO namenode.FSNamesystem: fsOwner=root
16/11/17 13:45:00 INFO namenode.FSNamesystem: supergroup=supergroup
16/11/17 13:45:00 INFO namenode.FSNamesystem: isPermissionEnabled=false
16/11/17 13:45:00 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
16/11/17 13:45:00 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
16/11/17 13:45:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/11/17 13:45:00 INFO common.Storage: Image file of size 110 saved in 0 seconds.
16/11/17 13:45:00 INFO common.Storage: Storage directory /export/Data/hadoop/tmp/dfs/name has been successfully formatted.
16/11/17 13:45:00 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[root@localhost bin]#
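A sketch of starting Hadoop and verifying the daemons before moving on to HBase, assuming passwordless SSH to localhost (which the start scripts need) and the bin directory of the Hadoop install:
[root@localhost ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   # only if passwordless SSH is not yet set up
[root@localhost ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@localhost bin]# ./start-all.sh
[root@localhost bin]# jps   # expect NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker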
=============================
Part 2: Install and configure HBase
1. Edit the configuration files hbase-env.sh and hbase-site.xml under the conf directory of hbase-0.94.18
Changes to hbase-env.sh:
export JAVA_HOME=/usr/Java/jdk1.6
export HBASE_CLASSPATH=/usr/hadoop/conf
export HBASE_MANAGES_ZK=true
# HBase log directory
export HBASE_LOG_DIR=/root/hadoop/hbase-0.94.6.1/logs
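The log directory may also need creating by hand (a sketch, using the 0.94.6.1 path from HBASE_LOG_DIR above):
[root@xinyanfei ~]# mkdir -p /root/hadoop/hbase-0.94.6.1/logs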
Changes to hbase-site.xml:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
With the above in place, HBase can be started normally. Start order: start Hadoop first, then HBase; shutdown order: stop HBase first, then Hadoop.
Start HBase:
zcf@zcf-K42JZ:/usr/local/hbase$ bin/start-hbase.sh
Check the running processes with jps:
4798 SecondaryNameNode
16790 Jps
4275 NameNode
5154 TaskTracker
16269 HQuorumPeer
4908 JobTracker
16610 HRegionServer
5305
4549 DataNode
16348 HMaster
Enter the HBase shell: bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.18, r1577788, Sat Mar 15 04:46:47 UTC 2014
hbase(main):001:0>
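From here, a quick smoke test (a sketch — the table name 'test' and column family 'cf' are made up for illustration):
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> disable 'test'
hbase(main):005:0> drop 'test'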
To shut down, stop HBase first, then Hadoop.
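A sketch of the shutdown commands in that order (the Hadoop install path here is an assumption):
zcf@zcf-K42JZ:/usr/local/hbase$ bin/stop-hbase.sh
zcf@zcf-K42JZ:/usr/local/hbase$ /usr/local/hadoop/bin/stop-all.sh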
The HBase database can also be managed and inspected through its web UI:
HMaster:http://192.168.0.10:60010/master.jsp
Note: HBase's default hbase.master port is 60000:
<property>
<name>hbase.master</name>
<value>192.168.0.10:60000</value>
</property>
If the master port is changed in the configuration file, the Configuration object used by the Java API must be pointed at that XML file explicitly:
configuration.addResource(new FileInputStream(new File("hbase-site.xml")));
Otherwise the client fails with: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Call to master1/172.22.2.170:60000