Notes on developing and operating Cloudera CDH/CDP, the Hadoop ecosystem, and Semantic IoT. Questions to gooper@gooper.com.
Reference: http://www.programering.com/a/MzMwQDMwATU.html
Today, while testing remote access to an AIX host from Java, I realized I had only ever installed Hadoop on Linux, never on AIX. Curious about what would be different on AIX, I decided on impulse to install it there and recorded the process below:
1. On AIX, install software that can extract archives, and install Java.
2. Download Hadoop 0.21.0 and unzip it into a directory of your choice, e.g. /home/cqq/hadoop-0.21.0.
3. Set the Hadoop environment variables as follows:
export HADOOP_HOME=/home/cqq/hadoop-0.21.0
export HADOOP_CONF_DIR=/home/cqq/hadoop-0.21.0/conf
export PATH=$PATH:$HADOOP_HOME/bin
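To confirm the exports took effect in the current shell, a quick check like the following can be run (a sketch using the same paths as above):

```shell
#!/bin/sh
# Re-state the exports from the steps above, then verify that
# HADOOP_HOME/bin actually landed on PATH.
export HADOOP_HOME=/home/cqq/hadoop-0.21.0
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
export PATH=$PATH:$HADOOP_HOME/bin
echo "$PATH" | grep -q "$HADOOP_HOME/bin" && echo "PATH OK"
```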
4. After verifying the installation so far, running Hadoop produced the following message:
[bash: no such file or directory]
This happens because the default shell on AIX is ksh, so some bash constructs are not supported. So first download and install the bash shell:
a. Download page: http://www-03.ibm.com/systems/power/software/aix/linux/toolbox/alpha.html
b. Download the RPM matching your AIX version; I used bash-4.2-1.aix6.1.ppc.rpm.
c. Upload the RPM package to the AIX server and install it (install: rpm -ivh bash-4.2-1.aix6.1.ppc.rpm)
d. After installing Hadoop, set JAVA_HOME.
5. Going back to Hadoop, it now prints [Usage: hadoop [--config confdir] COMMAND], so the installation succeeded.
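Before re-running Hadoop, a quick sanity check that a bash binary is now reachable can save a round trip (a sketch; nothing here is AIX-specific):

```shell
#!/bin/sh
# Sanity check after installing the bash RPM: confirm a bash binary
# is on PATH and report where it lives.
if command -v bash >/dev/null 2>&1; then
    echo "bash found: $(command -v bash)"
else
    echo "bash missing - install the RPM and fix the /bin/bash symlink"
fi
```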
6. Start Hadoop and configure the key settings as prompted.
Note: this was a test of the standalone installation; a real cluster installation additionally requires configuring the master and slaves.
================================================================================
1. Install bash: http://egloos.zum.com/program/v/1373097
------------------- screen after bash installation completes
IUDGTMP01:/engine/bigdata# rpm -ivh bash-4.3.30-1.aix6.1.ppc.rpm
bash ##################################################
## Binary "bash" is avaible in 32bit and 64bit ##
The default used is 64bit
Please change symbolic link
for "bash" in /bin directory
To do that type:
# rm -f /bin/bash
# ln -sf /opt/freeware/bin/bash_32 /bin/bash
2. Set and apply JAVA_HOME etc.
Edit: vi ~/.profile
JAVA_HOME=/usr/java7_64
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.:$JAVA_HOME/bin
Apply: $ . ~/.profile
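A quick way to confirm the profile change was picked up (a sketch; /usr/java7_64 is the AIX Java 7 path used above, adjust to your install):

```shell
#!/bin/sh
# Re-state the profile settings and echo the resolved JAVA_HOME.
JAVA_HOME=/usr/java7_64
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME PATH
echo "JAVA_HOME is $JAVA_HOME"
```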
3. Install and test hadoop
a. Download hadoop-2.6.4.tar.gz, upload it, and extract it:
$gzip -d hadoop*
$tar -xvf hadoop*
(e.g.,
$mv hadoop-2.7.2-b.tar.gz hadoop-2.7.2.tar.gz
$gzip -d hadoop-2.7.2.tar.gz
$tar -xvf hadoop-2.7.2.tar
)
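The two-step gzip/tar sequence is needed because AIX's native tar lacks GNU tar's -z option. The same sequence can be tried safely on a throwaway archive:

```shell
#!/bin/sh
# Demonstrate the two-step extraction on a dummy archive.
mkdir -p demo-pkg && echo hello > demo-pkg/file.txt
tar -cf demo.tar demo-pkg
gzip demo.tar            # produces demo.tar.gz
rm -rf demo-pkg          # remove the original so extraction is visible
gzip -d demo.tar.gz      # step 1: decompress (restores demo.tar)
tar -xf demo.tar         # step 2: extract
cat demo-pkg/file.txt    # -> hello
```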
b. Create a symbolic link
$ ln -s hadoop-2.6.4 hadoop
c. Run start-all.sh (for testing; executed directly, without any further configuration)
sbin/start-all.sh
--> result
$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/09/20 20:14:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is fd:61:e8:11:8e:ee:59:09:79:bd:85:26:7d:3a:1d:c5.
Are you sure you want to continue connecting (yes/no)? yes
bigdata@localhost's password:
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: Error: JAVA_HOME is not set and could not be found.
bigdata@localhost's password:
localhost: Error: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is fd:61:e8:11:8e:ee:59:09:79:bd:85:26:7d:3a:1d:c5.
Are you sure you want to continue connecting (yes/no)? yes
bigdata@0.0.0.0's password:
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: Error: JAVA_HOME is not set and could not be found.
16/09/20 20:16:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /engine/bigdata/hadoop-2.6.4/logs/yarn-bigdata-resourcemanager-IUDGTMP01.out
bigdata@localhost's password:
localhost: Error: JAVA_HOME is not set and could not be found.
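The repeated password prompts in the transcript above go away once passwordless SSH to localhost is set up for the account running the start scripts. A standard ssh-keygen flow (a sketch; the guard keeps any existing key intact):

```shell
#!/bin/sh
# Passwordless SSH to localhost for the hadoop start scripts.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```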
d. Edit the various conf settings
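As a concrete example, the two errors from the test run above (JAVA_HOME not found, and "namenode address ... is not configured") map to settings along these lines. This is a sketch: the conf path assumes the commands run from HADOOP_HOME, and hdfs://localhost:9000 is a hypothetical single-node address.

```shell
#!/bin/sh
# Minimal conf edits implied by the errors above.
HADOOP_CONF=etc/hadoop      # 2.x conf dir, relative to HADOOP_HOME
mkdir -p "$HADOOP_CONF"

# The start scripts do not inherit JAVA_HOME from the login shell,
# so pin it in hadoop-env.sh:
echo 'export JAVA_HOME=/usr/java7_64' >> "$HADOOP_CONF/hadoop-env.sh"

# "namenode address ... is not configured" means fs.defaultFS is unset:
cat > "$HADOOP_CONF/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
```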
4. Starting hadoop
* Create directories
#1, #2 for dfs
mkdir /engine/bigdata/hadoop-2.7.2/dfs
mkdir /engine/bigdata/hadoop-2.7.2/dfs/namenode
#2, #3,#4 for journal
mkdir /engine/bigdata/hadoop/journal
mkdir /engine/bigdata/hadoop/journal/data
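The directory list above can be created in one pass with mkdir -p, which also creates missing parents. A sketch; BASE points at a scratch path so it is safe to run as-is, substitute /engine/bigdata on the real hosts:

```shell
#!/bin/sh
# Create the dfs and journal directories from the list above.
BASE=/tmp/hadoop-demo
mkdir -p "$BASE/hadoop-2.7.2/dfs/namenode"   # on #1, #2: dfs dirs
mkdir -p "$BASE/hadoop/journal/data"         # on #2, #3, #4: journalnode dirs
ls -d "$BASE/hadoop-2.7.2/dfs/namenode" "$BASE/hadoop/journal/data"
```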
a. bin/hdfs zkfc -formatZK (on #1)
b. sbin/hadoop-daemon.sh start journalnode (on #2, #3, #4)
c. bin/hdfs namenode -format (on #1; if the namenode on #2 refuses to start complaining that it is not formatted, format the namenode on #2 as well)
d. sbin/hadoop-daemon.sh start namenode (on #1, #2)
* Verify
http://XXX.XXX.XXX.XXX:8088/
http://XXX.XXX.XXX.XX1:50070/ (check namenode #1)
http://XXX.XXX.XXX.XX2:50070/ (check namenode #2)