Environment preparation, four nodes (hostname and IP, typically mapped in /etc/hosts on every node):
hadoop1  192.168.1.112
hadoop2  192.168.1.113
hadoop3  192.168.1.114
hadoop4  192.168.1.115

Distribute the SSH authorized_keys file from hadoop1 so it can log in to the other nodes without a password:
scp ~/.ssh/authorized_keys hadoop@192.168.1.113:/home/hadoop/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@192.168.1.114:/home/hadoop/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@192.168.1.115:/home/hadoop/.ssh/authorized_keys
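The authorized_keys file being copied above has to exist first; the key-generation step is not shown in the original text. A minimal sketch, assuming the hadoop user on hadoop1 and that /etc/hosts already maps the hostnames listed above:

# On hadoop1, as the hadoop user: generate a passphrase-less key pair
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Quick check that passwordless login works after the scp steps
ssh hadoop2 hostname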
d) core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-2.2.0/mytmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>hadoop1</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>
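After editing the file, the effective value can be checked from the command line; a quick sketch, assuming HADOOP_CONF_DIR points at this etc/hadoop directory:

# Should print hdfs://hadoop1:9000
hdfs getconf -confKey fs.defaultFS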
e) hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
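The name and data directories referenced above are not created anywhere in the steps shown; creating them up front, owned by the hadoop user, avoids permission surprises at first start. A minimal sketch, assuming the same paths as in the config:

# On the namenode (hadoop1)
mkdir -p /home/hadoop/name
# On every datanode
mkdir -p /home/hadoop/data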
f) mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop1:19888</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/mr-history/tmp</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/mr-history/done</value>
  </property>
</configuration>
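Note that the Hadoop 2.2.0 distribution ships only a template for this file, so it usually has to be created first (assuming the default etc/hadoop layout):

cd /home/hadoop/hadoop-2.2.0/etc/hadoop
cp mapred-site.xml.template mapred-site.xml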
g) yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop1:18040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop1:18030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop1:18025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop1:18041</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop1:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/mynode/my</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/hadoop/mynode/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>10800</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>-1</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>-1</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
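One file not shown above: if the stock start scripts in sbin/ are used, etc/hadoop/slaves on hadoop1 should list the datanode hosts, one per line, so that start-dfs.sh and start-yarn.sh can start the worker daemons remotely. Assuming hadoop2 through hadoop4 are the datanodes:

hadoop2
hadoop3
hadoop4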
3) After the files above are configured, copy the hadoop-2.2.0 directory to the same path on the remaining datanode machines (a copy sketch follows the variable listing below), then add the Hadoop environment variables on every node:

# hadoop variable settings
export HADOOP_HOME=/home/hadoop/hadoop-2.2.0
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

Note: with Hadoop 2.2.0 a warning like the following may appear in the logs: "You have loaded library /home/hadoop/2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard." This is commonly attributed to the bundled 32-bit native libraries running on a 64-bit OS and does not prevent the cluster from working.
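A minimal sketch of the copy and environment step referred to in 3), assuming the variables are kept in each user's ~/.bashrc (the original text does not say which profile file is used):

# From hadoop1: copy the configured installation to each datanode
scp -r /home/hadoop/hadoop-2.2.0 hadoop@hadoop2:/home/hadoop/
scp -r /home/hadoop/hadoop-2.2.0 hadoop@hadoop3:/home/hadoop/
scp -r /home/hadoop/hadoop-2.2.0 hadoop@hadoop4:/home/hadoop/

# On every node: append the export lines above to ~/.bashrc, then reload it
source ~/.bashrc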
After the environment variables are configured, reboot the machine (or at least log in again so the new variables take effect); the same variables must also be added on all datanode nodes, followed by a reboot. Once Hadoop has been started, verify the cluster in a browser:
http://hadoop1:50070   (HDFS NameNode web UI)
http://hadoop1:8088    (YARN ResourceManager web UI)
http://hadoop1:19888   (MapReduce JobHistory web UI)
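The start commands themselves are not listed above; a minimal sketch of the usual Hadoop 2.2.0 sequence, run as the hadoop user on hadoop1 with bin/ and sbin/ on the PATH:

# One-time only: format the NameNode metadata directory
hdfs namenode -format

# Start HDFS (NameNode + DataNodes) and YARN (ResourceManager + NodeManagers)
start-dfs.sh
start-yarn.sh

# Start the JobHistory server configured in mapred-site.xml
mr-jobhistory-daemon.sh start historyserver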