
Installing and deploying Hadoop 2.2 on CentOS 6

 italyfiori 2014-10-09

  Environment
1. Operating system: CentOS 6.0, 64-bit
2. Hadoop version: hadoop-2.2.0

The installation and configuration steps are as follows:
1. Host and IP assignment
     IP address             Hostname           Role
     192.168.1.112      hadoop1            namenode
     192.168.1.113      hadoop2            datanode
     192.168.1.114      hadoop3            datanode
     192.168.1.115      hadoop4            datanode
2. Change the hostname on all four machines; hadoop1 is shown as an example
    1) [root@hadoop1 ~]# hostname hadoop1
    2) [root@hadoop1 ~]# vi /etc/sysconfig/network, change the HOSTNAME value and save
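   On CentOS 6 only the HOSTNAME line in that file needs to change; for hadoop1 it ends up looking like this:

     NETWORKING=yes
     HOSTNAME=hadoop1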
3. Install the JDK on all four machines
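   For example, installing an Oracle JDK 7 RPM might look like the sketch below (the package name, version and install path are placeholders; adjust them to whatever JDK you actually use):

     rpm -ivh jdk-7u67-linux-x64.rpm             # hypothetical package name
     # then, e.g. in /etc/profile:
     export JAVA_HOME=/usr/java/jdk1.7.0_67      # adjust to your real install path
     export PATH=$JAVA_HOME/bin:$PATH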
4. Disable the firewall: switch to the root user and run chkconfig iptables off
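   chkconfig only changes what happens at the next boot; to also stop the firewall that is currently running you can additionally do:

     service iptables stop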
5. On every machine, add the IP address resolution to /etc/hosts (the IP comes first, then the hostname):

    192.168.1.112 hadoop1
    192.168.1.113 hadoop2
    192.168.1.114 hadoop3
    192.168.1.115 hadoop4
6. Configure passwordless SSH from the namenode to the datanodes
   1) On the namenode, as the hadoop user, run
     ssh-keygen -t rsa
     and accept the default answer at every prompt.
   2) Append the public key to the local authorized_keys file
      cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
   3) Copy the key file to the other datanode nodes

    scp ~/.ssh/authorized_keys hadoop@192.168.1.113:/home/hadoop/.ssh/authorized_keys
    scp ~/.ssh/authorized_keys hadoop@192.168.1.114:/home/hadoop/.ssh/authorized_keys
    scp ~/.ssh/authorized_keys hadoop@192.168.1.115:/home/hadoop/.ssh/authorized_keys

    Because this is the first connection to each host, you will be prompted for the hadoop user's password; just type it in.
   4) Verify that you can now log in to a datanode without a password
      [hadoop@hadoop1 ~]$ ssh 192.168.1.113
      If you land directly in a shell with no password prompt, e.g.
        [hadoop@hadoop2 ~]$
      the passwordless login works; check the other datanodes the same way.
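   If SSH still asks for a password, the most common cause is that sshd rejects keys whose files are too permissive; tightening the permissions on every node (as the hadoop user) usually fixes it:

     chmod 700 ~/.ssh
     chmod 600 ~/.ssh/authorized_keys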
7. Install hadoop 2.2
   1) Unpack hadoop-2.2.0.tar.gz
       tar -zxf hadoop-2.2.0.tar.gz
       By default it unpacks into the current directory; here it is unpacked under /home/hadoop/.
   2) Edit the hadoop configuration files
       They live in hadoop-2.2.0/etc/hadoop; modify the following files.


      a) hadoop-env.sh: find JAVA_HOME and set it to the real JDK path
      b) yarn-env.sh: likewise, set JAVA_HOME to the real path
      c) slaves: this file lists all datanode hosts so the namenode can find them; in this example it contains
         hadoop2
         hadoop3
         hadoop4
      d) core-site.xml
       

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://hadoop1:9000</value>
        </property>
        <property>
            <name>io.file.buffer.size</name>
            <value>131072</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/hadoop/hadoop-2.2.0/mytmp</value>
            <description>A base for other temporary directories.</description>
        </property>
        <property>
            <name>hadoop.proxyuser.root.hosts</name>
            <value>hadoop1</value>
        </property>
        <property>
            <name>hadoop.proxyuser.root.groups</name>
            <value>*</value>
        </property>
    </configuration>

   e) hdfs-site.xml
     

    <configuration>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/home/hadoop/name</value>
            <final>true</final>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/home/hadoop/data</value>
            <final>true</final>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>3</value>
        </property>
        <property>
            <name>dfs.permissions</name>
            <value>false</value>
        </property>
    </configuration>
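    The name and data paths above are plain local directories. Formatting the namenode will create the name directory itself, but creating both up front (as the hadoop user) on the nodes that use them does no harm and avoids ownership surprises on first start:

     mkdir -p /home/hadoop/name      # on the namenode
     mkdir -p /home/hadoop/data      # on each datanode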

  f) mapred-site.xml
  

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>hadoop1:10020</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>hadoop1:19888</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.intermediate-done-dir</name>
            <value>/mr-history/tmp</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.done-dir</name>
            <value>/mr-history/done</value>
        </property>
    </configuration>
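    Note that the hadoop-2.2.0 distribution ships this file only as a template; copy it to the real name before editing:

     cd /home/hadoop/hadoop-2.2.0/etc/hadoop
     cp mapred-site.xml.template mapred-site.xml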

 g) yarn-site.xml
   

    <configuration>
        <property>
            <name>yarn.resourcemanager.address</name>
            <value>hadoop1:18040</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>hadoop1:18030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>hadoop1:18025</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address</name>
            <value>hadoop1:18041</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>hadoop1:8088</value>
        </property>
        <property>
            <name>yarn.nodemanager.local-dirs</name>
            <value>/home/hadoop/mynode/my</value>
        </property>
        <property>
            <name>yarn.nodemanager.log-dirs</name>
            <value>/home/hadoop/mynode/logs</value>
        </property>
        <property>
            <name>yarn.nodemanager.log.retain-seconds</name>
            <value>10800</value>
        </property>
        <property>
            <name>yarn.nodemanager.remote-app-log-dir</name>
            <value>/logs</value>
        </property>
        <property>
            <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
            <value>logs</value>
        </property>
        <property>
            <name>yarn.log-aggregation.retain-seconds</name>
            <value>-1</value>
        </property>
        <property>
            <name>yarn.log-aggregation.retain-check-interval-seconds</name>
            <value>-1</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
    </configuration>
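    One caveat: the remote-app-log-dir and log-aggregation retention settings above only take effect when log aggregation is switched on, and yarn.log-aggregation-enable defaults to false. If aggregated logs are wanted, also add:

        <property>
            <name>yarn.log-aggregation-enable</name>
            <value>true</value>
        </property>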

  3) After the files above are configured, copy the whole hadoop-2.2.0 directory to the same path on every datanode machine (see the scp sketch at the end of this step).
    Then set the hadoop environment variables: switch to the root user, edit /etc/profile,
     and append the following at the end of the file:
    

    #hadoop variable settings
    export HADOOP_HOME=/home/hadoop/hadoop-2.2.0
    export HADOOP_COMMON_HOME=$HADOOP_HOME
    export HADOOP_HDFS_HOME=$HADOOP_HOME
    export HADOOP_MAPRED_HOME=$HADOOP_HOME
    export HADOOP_YARN_HOME=$HADOOP_HOME
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib

    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
    Save the file after adding these lines.
    The last two lines deserve special mention: some articles omit them, and starting hadoop 2.2 then fails with the errors below.
  

Hadoop 2.2.0 - warning: You have loaded library /home/hadoop/2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard.

Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hadoop/2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
localhost]
sed: -e expression #1, char 6: unknown option to `s'
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
Java: ssh: Could not resolve hostname Java: Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known

 After the configuration is complete, re-login or run source /etc/profile (a full reboot also works) so the new variables take effect; every datanode needs the same environment-variable additions as well.
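 A sketch of pushing the unpacked directory to the datanodes and reloading the environment, assuming the hadoop user and paths used throughout this article:

     scp -r /home/hadoop/hadoop-2.2.0 hadoop@hadoop2:/home/hadoop/
     scp -r /home/hadoop/hadoop-2.2.0 hadoop@hadoop3:/home/hadoop/
     scp -r /home/hadoop/hadoop-2.2.0 hadoop@hadoop4:/home/hadoop/
     source /etc/profile     # run on each node after its /etc/profile has been edited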
8. Starting and stopping hadoop
   1) Format the namenode; this is only needed once, before the very first start
       cd /home/hadoop/hadoop-2.2.0/bin
        hdfs namenode -format
   2) Start
       On the namenode machine, go to /home/hadoop/hadoop-2.2.0/sbin
       and run the script start-all.sh
   3) Stop
       On the namenode machine, go to /home/hadoop/hadoop-2.2.0/sbin
      and run stop-all.sh
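   Once start-all.sh finishes, jps is a quick way to confirm the daemons came up; on the namenode you should typically see NameNode, SecondaryNameNode and ResourceManager, and on each datanode DataNode and NodeManager:

     jps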
9. Web interfaces

After hadoop is started, the cluster can be checked in a browser at the following addresses:

http://hadoop1:50070   (HDFS namenode)

http://hadoop1:8088    (YARN resourcemanager)

http://hadoop1:19888   (MapReduce job history)
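The job-history page on port 19888 is served by the MapReduce JobHistory Server, which start-all.sh does not start; if that page is unreachable, start it separately on hadoop1:

    /home/hadoop/hadoop-2.2.0/sbin/mr-jobhistory-daemon.sh start historyserver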





 

    
