Installing yarn (hadoop 2.6.0) on five bananapi boards (Ubuntu-family Linux) - including ResourceManager HA / HDFS HA and JobHistory


0. Do all installation as root, then configure each account (e.g., hadoop) so it can run mapreduce jobs.

0-1. Change the root password: passwd root
 
0-2. Edit and apply the hostname: edit => vi /etc/hostname, apply (no reboot needed) => /bin/hostname -F /etc/hostname

 
0-3. Target layout

   master: ResourceManager, NameNode, JobHistoryServer, DFSZKFailoverController

   node1: ResourceManager, NameNode (standby), JournalNode, DFSZKFailoverController, NodeManager, DataNode

   node2: JournalNode, NodeManager, DataNode

   node3: JournalNode, NodeManager, DataNode

   node4: NodeManager, DataNode
 
0-4. Relocate the /tmp directory (run as root on every node; the default location is not only small but also slow, so it is moved)
 
 - Mount the external HDD (https://www.gooper.com/ss/index.php?mid=bigdata&category=2772&document_srl=2984) and create a tmp directory under the /data mount point.
 
 - chmod 777 /data/tmp so that every account can use it.
 
 - Add TEMP=/data/tmp to the /etc/environment file and reboot (the whole sequence is sketched below).
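
The steps above as one runnable sequence (a sketch; assumes the external HDD is already mounted at /data):

mkdir -p /data/tmp
chmod 777 /data/tmp
echo "TEMP=/data/tmp" >> /etc/environment
reboot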
 

0-5. Calculating the memory settings
 
root@master:/data/home/hadoop/hdp_manual_install_rpm_helper_files-2.0.6.101/scripts# python yarn-utils.py -c 2 -m 1 -d 1 -k True
 
Using cores=2 memory=1GB disks=1 hbase=True
 Profile: cores=2 memory=2048MB reserved=0GB usableMem=-1GB disks=1
 Num Container=3
 Container Ram=682MB
 Used Ram=1GB
 Unused Ram=0GB
 yarn.scheduler.minimum-allocation-mb=682
 yarn.scheduler.maximum-allocation-mb=2046
 yarn.nodemanager.resource.memory-mb=2046
 mapreduce.map.memory.mb=682
 mapreduce.map.java.opts=-Xmx545m
 mapreduce.reduce.memory.mb=1364
 mapreduce.reduce.java.opts=-Xmx1091m
 yarn.app.mapreduce.am.resource.mb=1364
 yarn.app.mapreduce.am.command-opts=-Xmx1091m
 mapreduce.task.io.sort.mb=272
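
Reading the relationships straight off the output above (observed in these numbers, not quoted from the script): each container gets usableMem / numContainers = 2048 / 3 ≈ 682MB; yarn.nodemanager.resource.memory-mb = 3 × 682 = 2046; the reduce and AM containers are sized at 2 × 682 = 1364MB; each -Xmx is 0.8 × its container (0.8 × 682 ≈ 545, 0.8 × 1364 ≈ 1091); and mapreduce.task.io.sort.mb is 0.4 × the map container (0.4 × 682 ≈ 272).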

 
 

 1. Network setup (when using an 8-port gigabit switch hub attached to the router) - run as root

- Edit the /etc/network/interfaces file as follows:
 
auto lo
iface lo inet loopback

#auto eth0
#iface eth0 inet dhcp
auto eth0
iface eth0 inet static
address 192.168.10.100
netmask 255.255.255.0
gateway 192.168.10.1
#broadcast 192.168.10.1


- Edit /etc/resolvconf/resolv.conf.d/base
 
Previously /etc/resolv.conf was edited directly, but that file is now reset every time the server restarts.

Open /etc/resolvconf/resolv.conf.d/base and add

nameserver 168.126.63.1
nameserver 168.126.63.2

then regenerate the file with

$ sudo resolvconf -u

Do this on every server.
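
To confirm the regenerated file picked up the new entries:

$ cat /etc/resolv.conf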

* Apply the network configuration changes: /etc/init.d/networking restart
 
2. Create the account (hadoop) and set its password (run as root)

a. adduser hadoop
 
b. passwd hadoop
 
c. Change the user's home directory
 
  (On an ordinary server the default /home/hadoop is fine, but on the banana pi /home is small, so the home directory is placed on separate storage so that each account works under its own directory; the hadoop conf files then refer to the account as ${user.name}.)
 
- root@master:/home/hadoop# mkdir /data/home/hadoop

- root@master:/home/hadoop# chown hadoop /data/home/hadoop

- root@master:/home/hadoop# usermod -d /data/home/hadoop hadoop


 * Alternatively, leave the home directory unchanged and, after setup is complete, configure booting from the external hard drive as described at https://www.gooper.com/ss/index.php?mid=bigdata&category=2772&document_srl=3048.
 
d. Register the hadoop account in sudoers
 
- Switch to root: su - root
- Make /etc/sudoers writable: chmod u+w /etc/sudoers
- Register the user (hadoop) in /etc/sudoers
  => add it under the # User privilege specification section, as shown below
- Restore the /etc/sudoers permissions: chmod u-w /etc/sudoers
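
The entry uses the standard sudoers syntax, mirroring root's line:

# User privilege specification
root    ALL=(ALL:ALL) ALL
hadoop  ALL=(ALL:ALL) ALL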


3. Download and install the ARM jdk (run as root)

From http://www.oracle.com/technetwork/java/javase/downloads/jdk7-arm-downloads-2187468.html download:
 
Linux ARM v6/v7 Hard Float ABI 67.79 MB    jdk-7u60-linux-arm-vfp-hflt.tar.gz
 

* Installing the JDK:
 
a. Unpack: root@master:/tmp# tar zxvf jdk-7u60-linux-arm-vfp-hflt.tar.gz
 
b. Move to /usr/local: mv jdk1.7.0_60/ /usr/local/
 
c. Create a symbolic link (in /usr/local): ln -s jdk1.7.0_60/ jdk
 
d. Edit /etc/profile: vi /etc/profile and add the following at the top (applies system-wide):
 
export JAVA_HOME=/usr/local/jdk
export PATH="$JAVA_HOME/bin:$PATH"
export CLASSPATH=".:$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar"
export CATALINA_OPTS="-Djava.awt.headless=true"

e. Apply the profile: source /etc/profile
  

f. servlet-api.jar setup: https://www.gooper.com/ss/index.php?mid=bigdata&category=2813&document_srl=3195

g. Verify: java -version, javac -version
 
* Downloading YARN:
 
root@master:/tmp# wget http://apache.mirror.cdnetworks.com/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
 --2015-04-25 00:29:27--  http://apache.mirror.cdnetworks.com/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
 Resolving apache.mirror.cdnetworks.com (apache.mirror.cdnetworks.com)... 14.0.101.165
 Connecting to apache.mirror.cdnetworks.com (apache.mirror.cdnetworks.com)|14.0.101.165|:80... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 195257604 (186M) [application/x-gzip]
 Saving to: 'hadoop-2.6.0.tar.gz'
 
100%[===========================================================================================================================================>] 195,257,604 10.6MB/s   in 21s  
 
2015-04-25 00:29:47 (9.06 MB/s) - 'hadoop-2.6.0.tar.gz' saved [195257604/195257604]
 

* Installing YARN:
 
a. Unpack hadoop: root@master:/tmp# tar xvfz hadoop-2.6.0.tar.gz
 
b. Move to /usr/local/: root@master:/tmp# mv hadoop-2.6.0 /usr/local
 
c. Create the symbolic link: root@master:/usr/local# ln -s hadoop-2.6.0/ hadoop
 
d. Edit the hosts file: root@master:/usr/local# vi /etc/hosts
 
127.0.0.1       localhost
 #127.0.1.1      lemaker
 192.168.10.100  master
 192.168.10.101  node1
 192.168.10.102  node2
 192.168.10.103  node3
 192.168.10.104  node4
 # The following lines are desirable for IPv6 capable hosts
 ::1     ip6-localhost ip6-loopback
 fe00::0 ip6-localnet
 ff00::0 ip6-mcastprefix
 ff02::1 ip6-allnodes
 ff02::2 ip6-allrouters

e. Edit /etc/profile
 
export HOME=/usr/local

#java Setting
 export JAVA_HOME=$HOME/jdk
 export PATH=$JAVA_HOME/bin:$PATH
 export CLASSPATH=$JAVA_HOME/lib:$CLASSPATH

 # Hadoop Path
 export HADOOP_PREFIX=$HOME/hadoop
 export PATH=$PATH:$HADOOP_PREFIX/bin
 export HADOOP_HOME=$HOME/hadoop
 export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
 export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
 export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
 export YARN_HOME=${HADOOP_PREFIX}
 export HADOOP_YARN_HOME=${HADOOP_PREFIX}
 export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop

 # Native Path
 export HADOOP_COMMON_LIB_NATIVE_DIR=${YARN_HOME}/lib/native
 export HADOOP_OPTS="-Djava.library.path=$YARN_HOME/lib"


f. Apply: root@master:/usr/local# source /etc/profile
 
g. SSH setup
 
root@master:~# ssh-keygen -t rsa -P ""
 
Generating public/private rsa key pair.
 Enter file in which to save the key (/root/.ssh/id_rsa):
 Created directory '/root/.ssh'.
 Your identification has been saved in /root/.ssh/id_rsa.
 Your public key has been saved in /root/.ssh/id_rsa.pub.
 The key fingerprint is:
 68:7f:1d:c4:3e:13:c1:8b:93:5b:c8:d5:e2:b6:6f:5f root@master
 The key's randomart image is:
 +--[ RSA 2048]----+
 |           ...   |
 |           .+..  |
 |         . *+o   |
 |       .  *o=.   |
 |      o S  ==.   |
 |     . .  ...+   |
 |        . . ..  E|
 |         .    o .|
 |             . ..|
 +-----------------+


h. Keys generated as root are created in /root/.ssh/ (run the steps below once on master, then copy authorized_keys to node1~node4).
 
root@master:/root/.ssh# ll
 total 16
 drwx------  2 root root 4096 Apr 25 00:52 ./
 drwx------ 18 root root 4096 Apr 25 00:52 ../
 -rw-------  1 root root 1679 Apr 25 00:52 id_rsa
 -rw-r--r--  1 root root  393 Apr 25 00:52 id_rsa.pub
 

i. Create authorized_keys and copy it to each server.
 
root@master:/root/.ssh# cat id_rsa.pub >> authorized_keys
root@master:/root/.ssh# ll
 total 20
 drwx------  2 root root 4096 Apr 25 00:56 ./
 drwx------ 18 root root 4096 Apr 25 00:52 ../
 -rw-r--r--  1 root root  393 Apr 25 00:56 authorized_keys
 -rw-------  1 root root 1679 Apr 25 00:52 id_rsa
 -rw-r--r--  1 root root  393 Apr 25 00:52 id_rsa.pub
 
Because generating on master and copying to the nodes distributes master's public key, node1~node4 will accept connections from master. (For mutual access, run ssh-keygen on every server, then gather the contents of every server's authorized_keys and write the combined file back to each server's authorized_keys.)
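
A sketch for pushing the key to all four nodes from master (assumes root password login is still possible at this point):

for n in node1 node2 node3 node4; do
  ssh root@$n "mkdir -p /root/.ssh && chmod 700 /root/.ssh"
  scp /root/.ssh/authorized_keys root@$n:/root/.ssh/
done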
 
== Create the required directories
 
root@master:~/hadoop/etc/hadoop# mkdir -p ${HADOOP_PREFIX}/hdfs/namenode
root@master:~/hadoop/etc/hadoop# mkdir -p ${HADOOP_PREFIX}/hdfs/datanode
root@master:~/hadoop/etc/hadoop# mkdir -p ${HADOOP_PREFIX}/mapred/system
root@master:~/hadoop/etc/hadoop# mkdir -p ${HADOOP_PREFIX}/mapred/local
 

 == Set the log location (hadoop logs)
 
vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
 
# Where log files are stored.  $HADOOP_HOME/logs by default.
 #export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
Uncomment that line and set it as follows:
export HADOOP_LOG_DIR=/data/logs/$USER/hadoop


 == Set the log location (yarn logs)
 
vi /usr/local/hadoop/etc/hadoop/yarn-env.sh
 
# so that filenames w/ spaces are handled correctly in loops below
 IFS=
Below that line, add:
export YARN_LOG_DIR=/data/logs/$USER/yarn


 == Configuring the xml files ==
 
--hdfs-site.xml
 
 
 
<!-- Put site-specific property overrides in this file. -->
 <configuration>
    <property>
      <name>dfs.replication</name>
      <value>2</value>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/usr/local/hadoop/hdfs/namenode</value>
      <final>true</final>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/usr/local/hadoop/hdfs/datanode</value>
      <final>true</final>
    </property>
    <property>
      <name>dfs.permissions</name>
      <value>false</value>
    </property>
    <property>
       <name>dfs.http.address</name>
       <value>master:50070</value>
    </property>
    <property>
       <name>dfs.secondary.http.address</name>
       <value>master:50090</value>
    </property>
 </configuration>


 --core-site.xml
 
<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://master:9000</value>
      <final>true</final>
   </property>
   <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop/hdfs/tmp</value>
   </property>
 </configuration>


 --mapred-site.xml
 
<configuration>
 <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
 </property>
 <property>
     <name>mapred.system.dir</name>
     <value>/usr/local/hadoop/mapred/system</value>
     <final>true</final>
 </property>
 <property>
     <name>mapred.local.dir</name>
     <value>/usr/local/hadoop/mapred/local</value>
     <final>true</final>
 </property>
 </configuration>

root@Bananapi:/usr/local/hadoop/conf# vi master
 
master
 
root@Bananapi:/usr/local/hadoop/conf# vi hadoop-env.sh
 
# The java implementation to use.  Required.
 
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
 
export JAVA_HOME=/usr/local/jdk
 

 # If the value below is not set, jobs are submitted but never appear in the monitoring UI list and never progress past 0%.
 
# Extra Java CLASSPATH elements.  Optional.
 
# export HADOOP_CLASSPATH=
 
export HADOOP_CLASSPATH=/usr/local/hadoop/lib
 
 
 
===== Copy the setup to each server for clustering =====
 
-- Copy the entire jdk directory configured so far to each node (4 nodes, so copy 4 times).
 
--------------On master, java was installed at /usr/local/jdk1.7.0_60
 
root@master:~# scp -r jdk1.7.0_60/ root@node1:/usr/local
 
root@master:~# scp -r jdk1.7.0_60/ root@node2:/usr/local
 
root@master:~# scp -r jdk1.7.0_60/ root@node3:/usr/local
 
root@master:~# scp -r jdk1.7.0_60/ root@node4:/usr/local
 

-- Copy the entire hadoop directory configured so far to each node (4 nodes, so copy 4 times).
 
--------------On master, hadoop was installed at /usr/local/hadoop-2.6.0
 
root@master:~# scp -r hadoop-2.6.0/ root@node1:/usr/local
 
root@master:~# scp -r hadoop-2.6.0/ root@node2:/usr/local
 
root@master:~# scp -r hadoop-2.6.0/ root@node3:/usr/local
 
root@master:~# scp -r hadoop-2.6.0/ root@node4:/usr/local
 

-- Copy the environment settings to each node (identically to all 4 nodes).
 
root@master:~# scp /etc/profile root@node1:/etc/profile
 
root@master:~# scp /etc/profile root@node2:/etc/profile
 
root@master:~# scp /etc/profile root@node3:/etc/profile
 
root@master:~# scp /etc/profile root@node4:/etc/profile
 

--- Run source /etc/profile on each server.
 
--- Create the two symbolic links on each server (a scripted version of all these copy-and-link steps follows below).
 
1. root@node1:~# ln -s jdk1.7.0_60/ jdk
 
2. root@node1:~# ln -s hadoop-2.6.0/ hadoop
 
....
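
The per-node copies and links can also be done in one loop from master (a sketch; assumes the same /usr/local layout on every node):

for n in node1 node2 node3 node4; do
  scp -r /usr/local/jdk1.7.0_60 /usr/local/hadoop-2.6.0 root@$n:/usr/local/
  scp /etc/profile root@$n:/etc/profile
  ssh root@$n 'cd /usr/local && ln -s jdk1.7.0_60 jdk && ln -s hadoop-2.6.0 hadoop'
done

/etc/profile takes effect at each node's next login, so run source /etc/profile there afterwards.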
 

j. Format the namenode
 
root@master:~# hdfs namenode -format
 
 
15/04/25 01:39:08 INFO namenode.NameNode: STARTUP_MSG:
  /************************************************************
  STARTUP_MSG: Starting NameNode
  STARTUP_MSG:   host = master/192.168.10.100
  STARTUP_MSG:   args = [-format]
  STARTUP_MSG:   version = 2.6.0
  STARTUP_MSG:   classpath = ...(omitted)
  STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
  STARTUP_MSG:   java = 1.7.0_60
  ************************************************************/
  15/04/25 01:39:08 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
  15/04/25 01:39:08 INFO namenode.NameNode: createNameNode [-format]
  Java HotSpot(TM) Client VM warning: You have loaded library /usr/local/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
  It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
  15/04/25 01:39:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  15/04/25 01:39:12 WARN common.Util: Path /usr/local/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
  15/04/25 01:39:12 WARN common.Util: Path /usr/local/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
  Formatting using clusterid: CID-887f6bfc-8820-46bd-acc1-54c213990208
  15/04/25 01:39:13 INFO namenode.FSNamesystem: No KeyProvider found.
  15/04/25 01:39:13 INFO namenode.FSNamesystem: fsLock is fair:true
  15/04/25 01:39:13 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
  15/04/25 01:39:13 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
  15/04/25 01:39:13 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
  15/04/25 01:39:13 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Apr 25 01:39:13
  15/04/25 01:39:13 INFO util.GSet: Computing capacity for map BlocksMap
  15/04/25 01:39:13 INFO util.GSet: VM type       = 32-bit
  15/04/25 01:39:13 INFO util.GSet: 2.0% max memory 966.8 MB = 19.3 MB
  15/04/25 01:39:13 INFO util.GSet: capacity      = 2^22 = 4194304 entries
  15/04/25 01:39:14 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
  15/04/25 01:39:14 INFO blockmanagement.BlockManager: defaultReplication         = 3
  15/04/25 01:39:14 INFO blockmanagement.BlockManager: maxReplication             = 512
  15/04/25 01:39:14 INFO blockmanagement.BlockManager: minReplication             = 1
  15/04/25 01:39:14 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
  15/04/25 01:39:14 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
  15/04/25 01:39:14 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
  15/04/25 01:39:14 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
  15/04/25 01:39:14 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
  15/04/25 01:39:14 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
  15/04/25 01:39:14 INFO namenode.FSNamesystem: supergroup          = supergroup
  15/04/25 01:39:14 INFO namenode.FSNamesystem: isPermissionEnabled = false
  15/04/25 01:39:14 INFO namenode.FSNamesystem: HA Enabled: false
  15/04/25 01:39:14 INFO namenode.FSNamesystem: Append Enabled: true
  15/04/25 01:39:15 INFO util.GSet: Computing capacity for map INodeMap
  15/04/25 01:39:15 INFO util.GSet: VM type       = 32-bit
  15/04/25 01:39:15 INFO util.GSet: 1.0% max memory 966.8 MB = 9.7 MB
  15/04/25 01:39:15 INFO util.GSet: capacity      = 2^21 = 2097152 entries
  15/04/25 01:39:15 INFO namenode.NameNode: Caching file names occuring more than 10 times
  15/04/25 01:39:15 INFO util.GSet: Computing capacity for map cachedBlocks
  15/04/25 01:39:15 INFO util.GSet: VM type       = 32-bit
  15/04/25 01:39:15 INFO util.GSet: 0.25% max memory 966.8 MB = 2.4 MB
  15/04/25 01:39:15 INFO util.GSet: capacity      = 2^19 = 524288 entries
  15/04/25 01:39:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
  15/04/25 01:39:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
  15/04/25 01:39:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
  15/04/25 01:39:15 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
  15/04/25 01:39:15 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
  15/04/25 01:39:15 INFO util.GSet: Computing capacity for map NameNodeRetryCache
  15/04/25 01:39:15 INFO util.GSet: VM type       = 32-bit
  15/04/25 01:39:15 INFO util.GSet: 0.029999999329447746% max memory 966.8 MB = 297.0 KB
  15/04/25 01:39:15 INFO util.GSet: capacity      = 2^16 = 65536 entries
  15/04/25 01:39:15 INFO namenode.NNConf: ACLs enabled? false
  15/04/25 01:39:15 INFO namenode.NNConf: XAttrs enabled? true
  15/04/25 01:39:15 INFO namenode.NNConf: Maximum size of an xattr: 16384
  15/04/25 01:39:16 INFO namenode.FSImage: Allocated new BlockPoolId: BP-668396951-192.168.10.100-1429897155803
  15/04/25 01:39:16 INFO common.Storage: Storage directory /usr/local/hadoop-2.6.0/hdfs/namenode has been successfully formatted.
  15/04/25 01:39:17 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
 15/04/25 01:39:17 INFO util.ExitUtil: Exiting with status 0
  15/04/25 01:39:17 INFO namenode.NameNode: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at master/192.168.10.100
  ************************************************************/


 

------ Log from running start-all.sh --------------------------
 
root@master:~# start-all.sh
 
 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh

 15/04/26 23:29:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

 Starting namenodes on [master]
 master: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-master.out
 node4: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-node4.out
 node2: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-node2.out
 node1: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-node1.out
 node3: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-node3.out
 Starting secondary namenodes [node1]
 node1: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-node1.out
 15/04/26 23:30:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 starting yarn daemons
 starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-master.out
 node3: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-node3.out
 node2: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-node2.out
 node4: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-node4.out
 node1: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-node1.out


 

  ===> Starting dfs
 
root@master:~/hadoop/etc/hadoop# start-dfs.sh
 
15/04/25 02:42:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  Starting namenodes on [master]
  master: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-master.out
  master: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-master.out
  Starting secondary namenodes [master]
  master: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out
  15/04/25 02:43:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable


 
 
 
==> Starting yarn (start-yarn.sh)
 
 root@master:~/hadoop/etc/hadoop# start-yarn.sh
 
starting yarn daemons
  starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-master.out
  master: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-root-nodemanager-master.out


 
 
 
==> Create the HDFS home directory the hadoop account will use (run as the hadoop account) - /user is created by the root account.
 
root@master:/home/hadoop$ hadoop fs -mkdir /user
 
hadoop@master:/home/hadoop$ hadoop fs -mkdir /user/hadoop
 
( * If you try to create a directory such as hadoop fs -mkdir abc without first creating the account's home directory inside HDFS, it fails, so the home directory must be created first, under the account that will use it (e.g., hadoop).)
 
 
 
 ----------------------------------------------------------------------------------
 
 
 

m. Run the wordcount in the sample jar to verify that everything works (run as the hadoop account).
 
(a) Create the data path
 
   : hadoop fs -mkdir /user/hadoop/in
 
 
 

 (b) Upload the data
 
hadoop@master:~/hadoop/logs$ hadoop fs -put a.txt in/a.txt
 
 
15/04/26 17:20:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 hadoop@master:~/hadoop/logs$ hadoop fs -ls -R /user/hadoop
  15/04/26 17:21:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  drwxr-xr-x - hadoop supergroup 0 2015-04-26 17:21 /user/hadoop/in
  -rw-r--r-- 3 hadoop supergroup 119076 2015-04-26 17:20 /user/hadoop/in/a.txt


 
 
 
 
 (c) Run the job

hadoop@master:/data/home/hadoop$ yarn jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount in out

(For hadoop 2.7.x, use: yarn jar $HOME/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount in out)
 
 
15/04/26 22:49:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 15/04/26 22:50:03 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.10.100:8050
 15/04/26 22:50:05 INFO mapreduce.JobSubmissionFiles: Permissions on staging directory /tmp/hadoop-yarn/staging/hadoop/.staging are incorrect: rwxrwxrwx. Fixing permissions to correct value rwx------
15/04/26 22:50:09 INFO input.FileInputFormat: Total input paths to process : 1
 15/04/26 22:50:10 INFO mapreduce.JobSubmitter: number of splits:1
 15/04/26 22:50:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1430058040789_0004
 15/04/26 22:50:14 INFO impl.YarnClientImpl: Submitted application application_1430058040789_0004
 15/04/26 22:50:14 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1430058040789_0004/
 15/04/26 22:50:14 INFO mapreduce.Job: Running job: job_1430058040789_0004
 15/04/26 22:50:45 INFO mapreduce.Job: Job job_1430058040789_0004 running in uber mode : false
 15/04/26 22:50:45 INFO mapreduce.Job:  map 0% reduce 0%
 15/04/26 22:51:24 INFO mapreduce.Job:  map 100% reduce 0%
 15/04/26 22:51:47 INFO mapreduce.Job:  map 100% reduce 100%
 15/04/26 22:51:49 INFO mapreduce.Job: Job job_1430058040789_0004 completed successfully
 15/04/26 22:51:50 INFO mapreduce.Job: Counters: 49
         File System Counters
                 FILE: Number of bytes read=75
                 FILE: Number of bytes written=211791
                 FILE: Number of read operations=0
                 FILE: Number of large read operations=0
                 FILE: Number of write operations=0
                 HDFS: Number of bytes read=150
                 HDFS: Number of bytes written=53
                 HDFS: Number of read operations=6
                 HDFS: Number of large read operations=0
                 HDFS: Number of write operations=2
         Job Counters
                 Launched map tasks=1
                 Launched reduce tasks=1
                 Data-local map tasks=1
                 Total time spent by all maps in occupied slots (ms)=62958
                 Total time spent by all reduces in occupied slots (ms)=41772
                 Total time spent by all map tasks (ms)=31479
                 Total time spent by all reduce tasks (ms)=20886
                 Total vcore-seconds taken by all map tasks=31479
                 Total vcore-seconds taken by all reduce tasks=20886
                 Total megabyte-seconds taken by all map tasks=32234496
                 Total megabyte-seconds taken by all reduce tasks=21387264

         Map-Reduce Framework
                 Map input records=3
                 Map output records=4
                 Map output bytes=61
                 Map output materialized bytes=75
                 Input split bytes=104
                 Combine input records=4
                 Combine output records=4
                 Reduce input groups=4
                 Reduce shuffle bytes=75
                 Reduce input records=4
                 Reduce output records=4
                 Spilled Records=8
                 Shuffled Maps =1
                 Failed Shuffles=0
                 Merged Map outputs=1
                 GC time elapsed (ms)=1733
                 CPU time spent (ms)=6690
                 Physical memory (bytes) snapshot=220033024
                 Virtual memory (bytes) snapshot=716484608
                 Total committed heap usage (bytes)=133869568
         Shuffle Errors
                 BAD_ID=0
                 CONNECTION=0
                 IO_ERROR=0
                 WRONG_LENGTH=0
                 WRONG_MAP=0
                 WRONG_REDUCE=0
         File Input Format Counters
                 Bytes Read=46
         File Output Format Counters
                 Bytes Written=53


 
 
 

 * Running a job may fail with a permissions error on /tmp/...; this means the /tmp inside HDFS, not the OS directory. Either:
 
a. change its ownership to the account that runs the job (e.g., hadoop), or
 
b. delete /tmp, recreate it as the hadoop account with hadoop fs -mkdir /tmp, and then work only under the hadoop account, or
 
c. root@master:~/hadoop/etc# hadoop fs -chmod -R 1755 /tmp

    which opens it up to everyone while the sticky bit ensures that only the creating account can delete its own files.
 
 
 

------ listing below -----
 
hadoop@master:/data/home/hadoop/work/tmp$ hadoop fs -ls -R /
 
15/04/26 22:31:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 
drwx------   - hadoop supergroup          0 2015-04-26 22:28 /tmp
 
drwx------   - hadoop supergroup          0 2015-04-26 22:28 /tmp/hadoop-yarn
 
drwx------   - hadoop supergroup          0 2015-04-26 22:28 /tmp/hadoop-yarn/staging
 
drwx------   - hadoop supergroup          0 2015-04-26 22:28 /tmp/hadoop-yarn/staging/hadoop
 
drwx------   - hadoop supergroup          0 2015-04-26 22:28 /tmp/hadoop-yarn/staging/hadoop/.staging
 
drwxr-xr-x   - root   supergroup          0 2015-04-26 22:29 /user
 
drwxr-xr-x   - hadoop supergroup          0 2015-04-26 22:29 /user/hadoop
 
---------------------------
 
 
 
(d) Check the result
 
hadoop@master:/data/home/hadoop$ hadoop fs -ls -R out
 
15/04/26 22:58:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 
-rw-r--r--   3 hadoop supergroup          0 2015-04-26 22:51 out/_SUCCESS
 
-rw-r--r--   3 hadoop supergroup         53 2015-04-26 22:51 out/part-r-00000
 
 
 
  * Checking the output:
 
hadoop@master:/data/home/hadoop$ hadoop fs -cat out/part-r-00000
 
15/04/26 22:59:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 
hsldfhsladfhshjr        1
 
sadflsahdlfk    1
 
skdfsdf 1
 
slkjfl  1
 
 
 
----------------------------------ResourceManager HA setup (Zookeeper must be running)-------------------
 
Add the block below to yarn-site.xml, then restart with stop-all.sh -> start-all.sh.
 
Also log in to node1 (rm2 in the settings below) and start its resourcemanager explicitly with yarn-daemon.sh start resourcemanager.
 
 
 

 * Verifying HA:

- On the master node, yarn rmadmin -getServiceState rm1 should print active.

- On the master node, yarn rmadmin -getServiceState rm2 should print standby.
 
 
 
<property>
     <name>yarn.resourcemanager.ha.enabled</name>
     <value>true</value>
   </property>
   <property>
     <name>yarn.resourcemanager.cluster-id</name>
     <value>rmcluster</value>
   </property>
   <property>
     <name>yarn.resourcemanager.ha.rm-ids</name>
     <value>rm1,rm2</value>
   </property>
   <property>
     <name>yarn.resourcemanager.hostname.rm1</name>
     <value>master</value>
   </property>
   <property>
     <name>yarn.resourcemanager.hostname.rm2</name>
     <value>node1</value>
   </property>
   <property>
     <name>yarn.resourcemanager.zk-address</name>
     <value>master:2181,node1:2181,node2:2181</value>
   </property>


 
----------------------------------NameNode HA setup (Zookeeper must be running)-------------------
 
Configure hdfs-site.xml as follows.
 
 

  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>master:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>node1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node1:8485;node2:8485;node3:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence(root:22)</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>


 Add the following settings to core-site.xml.
 
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/journal/data</value>
  </property>
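
Before the format/bootstrap steps below, the JournalNodes (node1~node3 in the target layout) must already be running; start one on each of those nodes with:

$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode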


Once the configuration is complete:
 a. For an initial setup, run bin/hdfs namenode -format.
 b. When converting from non-HA to HA, run bin/hdfs namenode -bootstrapStandby so the NameNode metadata is copied over to the standby NameNode.
c. Initialize the HA state in ZooKeeper:
root@master:~/hadoop/etc/hadoop# bin/hdfs zkfc -formatZK

 
 
15/05/05 10:35:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  15/05/05 10:35:36 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at master/192.168.10.100:8020
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:host.name=master
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_60
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/jdk1.7.0_60/jre
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:java.class.path=...(omitted)
15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/hadoop/lib
 15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:os.arch=arm
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:os.version=3.4.103
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:user.name=root
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/hadoop-2.6.0/etc/hadoop
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=master:2181,node1:2181,node2:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@7f8123
  15/05/05 10:35:37 INFO zookeeper.ClientCnxn: Opening socket connection to server master/192.168.10.100:2181. Will not attempt to authenticate using SASL (unknown error)
  15/05/05 10:35:37 INFO zookeeper.ClientCnxn: Socket connection established to master/192.168.10.100:2181, initiating session
  15/05/05 10:35:37 INFO zookeeper.ClientCnxn: Session establishment complete on server master/192.168.10.100:2181, sessionid = 0x14d1fa1e7070003, negotiated timeout = 5000
  15/05/05 10:35:37 INFO ha.ActiveStandbyElector: Session connected.
  15/05/05 10:35:37 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
  15/05/05 10:35:37 INFO zookeeper.ZooKeeper: Session: 0x14d1fa1e7070003 closed
  15/05/05 10:35:37 INFO zookeeper.ClientCnxn: EventThread shut down

 
d. Run start-dfs.sh.

 When automatic failover is configured, the ZKFC daemon is started automatically; to start it manually, run $HADOOP_HOME/sbin/hadoop-daemon.sh start zkfc (or hdfs zkfc in the foreground).
 
 
--Bring up the standby namenode (run on node1)
 
root@node1:~/hadoop/logs# hdfs namenode -bootstrapStandby
 
15/05/05 15:00:18 INFO namenode.NameNode: STARTUP_MSG:
  /************************************************************
  STARTUP_MSG: Starting NameNode
  STARTUP_MSG:   host = node1/192.168.10.101
  STARTUP_MSG:   args = [-bootstrapStandby]
  STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = (omitted)
STARTUP_MSG:   java = 1.7.0_60
  ************************************************************/
  15/05/05 15:00:18 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
  15/05/05 15:00:18 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
  15/05/05 15:00:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  =====================================================
  About to bootstrap Standby ID nn2 from:
             Nameservice ID: mycluster
          Other Namenode ID: nn1
    Other NN's HTTP address: http://master:50070
    Other NN's IPC  address: master/192.168.10.100:9000
               Namespace ID: 1329206419
              Block pool ID: BP-1449891086-192.168.10.100-1430808045190
                 Cluster ID: CID-c651ea9e-fef2-4066-a862-17c09bd4a4b5
             Layout version: -60
  =====================================================
  15/05/05 15:00:25 INFO common.Storage: Storage directory /data/dfs/namenode has been successfully formatted.
  15/05/05 15:00:29 INFO namenode.TransferFsImage: Opening connection to http://master:50070/imagetransfer?getimage=1&txid=0&storageInfo=-60:1329206419:0:CID-c651ea9e-fef2-4066-a862-17c09bd4a4b5
  15/05/05 15:00:30 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
  15/05/05 15:00:30 INFO namenode.TransferFsImage: Transfer took 0.04s at 0.00 KB/s
  15/05/05 15:00:30 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 351 bytes.
  15/05/05 15:00:30 INFO util.ExitUtil: Exiting with status 0
  15/05/05 15:00:30 INFO namenode.NameNode: SHUTDOWN_MSG:
 /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.10.101
  ************************************************************/


----------------------------------JobHistoryServer setup-------------------
 
After the configuration below, the daemon must be started with mr-jobhistory-daemon.sh start historyserver.
 
 a. mapred-site.xml

  <property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>file:///data/hadoop/tmp/staging</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>sda1:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>


b. yarn-site.xml


<property>
     <name>yarn.log.server.url</name>
     <value>http://master:19888/jobhistory/logs</value>
</property>
<property> 
  <name>yarn.log-aggregation-enable</name> 
  <value>true</value>
</property>
<property>
 <name>yarn.nodemanager.log.retain-seconds</name>
 <value>900000</value>
</property>
<property>
 <name>yarn.nodemanager.remote-app-log-dir</name>
 <value>/app-logs</value>
</property>



 * Additional JobHistoryServer settings
mapreduce.jobhistory.address: MapReduce JobHistory Server host:port Default port is 10020.
 mapreduce.jobhistory.webapp.address: MapReduce JobHistory Server Web UI host:port Default port is 19888.
 mapreduce.jobhistory.intermediate-done-dir: Directory where history files are written by MapReduce jobs (in HDFS). Default is /mr-history/tmp
 mapreduce.jobhistory.done-dir: Directory where history files are managed by the MR JobHistory Server (in HDFS). Default is /mr-history/done
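
If the default history locations are kept, the two directories can be created in HDFS up front (a sketch using the defaults listed above):

hadoop fs -mkdir -p /mr-history/tmp /mr-history/done
hadoop fs -chmod -R 1777 /mr-history/tmp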
 
 
 
Monitoring

NameNode UI: http://192.168.10.100:50070/dfshealth.html#tab-overview
ResourceManager UI: http://192.168.10.100:8088 (the job-tracking URL in the wordcount run above is served through this)
 
 













