
Hadoop Installation + HBase Pseudo-Distributed Setup

 
Part 1: Installing Hadoop

Most of the configuration just appends a few lines to the end of each file; some of the directories referenced also have to be created by hand.

1 Set the hostname
[root@xinyanfei conf]# hostname
xinyanfei
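
How the hostname gets set is distribution-specific; a minimal sketch for the RHEL/CentOS 6-era system used here (adjust for your own distribution):

[root@xinyanfei ~]# hostname xinyanfei                # takes effect immediately
[root@xinyanfei ~]# vi /etc/sysconfig/network         # set HOSTNAME=xinyanfei so it survives a reboot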

2 Configure /etc/hosts
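
The hostname must resolve locally; here xinyanfei is mapped to 127.0.0.1, which the ping output in step 3 confirms. For example:

[root@xinyanfei ~]# echo "127.0.0.1 xinyanfei" >> /etc/hosts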

3 Ping the hostname to verify it resolves
[root@xinyanfei conf]# ping xinyanfei
PING xinyanfei (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.036 ms

4 Download Hadoop and unpack it
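
A sketch of fetching and unpacking the release into /export/servers (the URL follows the Apache archive layout; verify it before use):

[root@xinyanfei ~]# mkdir -p /export/servers
[root@xinyanfei ~]# wget https://archive.apache.org/dist/hadoop/common/hadoop-1.0.4/hadoop-1.0.4.tar.gz
[root@xinyanfei ~]# tar -xzf hadoop-1.0.4.tar.gz -C /export/servers/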
[root@xinyanfei ~]# cd /export/servers/hadoop-1.0.4/conf/

5 Review the modified core-site.xml
[root@xinyanfei conf]# cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://xinyanfei:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/export/Data/hadoop/tmp</value>
        </property>
</configuration>
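
As noted at the top, some paths must be created by hand; the hadoop.tmp.dir above is one of them, so create it before formatting the NameNode:

[root@xinyanfei conf]# mkdir -p /export/Data/hadoop/tmp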

6 Review the modified hadoop-env.sh
[root@xinyanfei conf]# cat hadoop-env.sh
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored.  $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HADOOP_NICENESS=10

export JAVA_HOME=/export/servers/jdk1.7.0_80/
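
The JAVA_HOME export at the end is the line appended to this file; you can confirm the path points at a working JDK:

[root@xinyanfei conf]# /export/servers/jdk1.7.0_80/bin/java -version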

7 Review the modified hdfs-site.xml

[root@xinyanfei conf]# cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

8 Review the modified mapred-site.xml

[root@xinyanfei conf]# cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property> 
        <name>mapred.job.tracker</name> 
        <value>xinyanfei:9001</value> 
    </property>
</configuration>

9 Review the masters and slaves files
[root@xinyanfei conf]# cat masters
localhost
[root@xinyanfei conf]# cat slaves
localhost
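
The start scripts ssh into every host listed in masters and slaves, so the node needs passwordless SSH to itself. A minimal sketch:

[root@xinyanfei ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[root@xinyanfei ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@xinyanfei ~]# chmod 600 ~/.ssh/authorized_keys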



10 Format the NameNode
[root@localhost bin]# hadoop namenode -format


Warning: $HADOOP_HOME is deprecated.

16/11/17 13:44:59 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
16/11/17 13:44:59 INFO util.GSet: VM type       = 64-bit
16/11/17 13:44:59 INFO util.GSet: 2% max memory = 17.78 MB
16/11/17 13:44:59 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/11/17 13:44:59 INFO util.GSet: recommended=2097152, actual=2097152
16/11/17 13:45:00 INFO namenode.FSNamesystem: fsOwner=root
16/11/17 13:45:00 INFO namenode.FSNamesystem: supergroup=supergroup
16/11/17 13:45:00 INFO namenode.FSNamesystem: isPermissionEnabled=false
16/11/17 13:45:00 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
16/11/17 13:45:00 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
16/11/17 13:45:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/11/17 13:45:00 INFO common.Storage: Image file of size 110 saved in 0 seconds.
16/11/17 13:45:00 INFO common.Storage: Storage directory /export/Data/hadoop/tmp/dfs/name has been successfully formatted.
16/11/17 13:45:00 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[root@localhost bin]#
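
With the NameNode formatted, Hadoop can be brought up from the same bin directory, and jps should then show all five daemons:

[root@localhost bin]# ./start-all.sh
[root@localhost bin]# jps    # expect NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker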

=============================


Part 2: Installing and Configuring HBase



1. Edit the configuration files hbase-env.sh and hbase-site.xml in the conf directory under hbase-0.94.18.

hbase-env.sh is modified as follows (these paths come from a different environment than Part 1; point JAVA_HOME and HBASE_CLASSPATH at your own JDK and Hadoop conf directory, e.g. /export/servers/jdk1.7.0_80 and /export/servers/hadoop-1.0.4/conf):

export JAVA_HOME=/usr/java/jdk1.6

export HBASE_CLASSPATH=/usr/hadoop/conf

export HBASE_MANAGES_ZK=true

# HBase log directory
export HBASE_LOG_DIR=/root/hadoop/hbase-0.94.18/logs

hbase-site.xml is modified as follows:

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
</configuration>

Once the above is done, HBase can be started normally. Startup order: start Hadoop first, then HBase. Shutdown order: stop HBase first, then Hadoop. One caveat: the host:port in hbase.rootdir should match the NameNode address in core-site.xml; in this single-node setup localhost and xinyanfei both resolve to 127.0.0.1, so hdfs://localhost:9000/hbase reaches the NameNode configured as hdfs://xinyanfei:9000.

Start HBase:

zcf@zcf-K42JZ:/usr/local/hbase$ bin/start-hbase.sh

Check the running processes with jps:

4798 SecondaryNameNode
16790 Jps
4275 NameNode
5154 TaskTracker
16269 HQuorumPeer
4908 JobTracker
16610 HRegionServer
5305
4549 DataNode
16348 HMaster

Enter the HBase shell: bin/hbase shell

HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.18, r1577788, Sat Mar 15 04:46:47 UTC 2014


hbase(main):001:0>
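
A quick smoke test from the shell (the table and column family names here are arbitrary examples):

hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> exit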

To shut everything down, stop HBase first, then stop Hadoop.
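
The corresponding commands, run from the HBase and Hadoop home directories respectively:

$ bin/stop-hbase.sh    # in the HBase directory
$ bin/stop-all.sh      # in the Hadoop directory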

You can also manage and inspect the HBase database through the web UI:

HMaster: http://192.168.0.10:60010/master.jsp



Note: HBase's default hbase.master port is 60000:

<property>
    <name>hbase.master</name>
    <value>192.168.0.10:60000</value>
</property>
If you change the master port in the configuration file, then when using the Java API you must register the xml file with the client Configuration explicitly, e.g.:

configuration.addResource(new FileInputStream(new File("hbase-site.xml")));

Otherwise the client fails with an error like:

org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Call to master1/172.22.2.170:60000