1. Download Maven
cd /usr/local

wget http://mirror.apache-kr.org/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz

--2015-04-30 15:32:17--  http://mirror.apache-kr.org/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz
Resolving mirror.apache-kr.org... 182.161.117.136
Connecting to mirror.apache-kr.org|182.161.117.136|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7956528 (7.6M) [application/x-gzip]
Saving to: “apache-maven-3.2.5-bin.tar.gz”

100%[==========================================================================================================================>] 7,956,528   11.2M/s   in 0.7s    

2015-04-30 15:32:23 (11.2 MB/s) - “apache-maven-3.2.5-bin.tar.gz” saved [7956528/7956528]

2. Extract the archive
tar xvfz apache-maven-3.2.5-bin.tar.gz

3. Create a symbolic link
 ln -s apache-maven-3.2.5 maven

4. Set the M2_HOME environment variable and add it to PATH
vi /etc/profile
export M2_HOME=/usr/local/maven
export PATH=$PATH:$M2_HOME/bin

* Apply the changes: source /etc/profile
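
* A quick way to confirm that Maven is picked up from the new PATH (the output shown in the comments is what is expected, and may vary by environment):
 which mvn        # should resolve to /usr/local/maven/bin/mvn
 mvn -version     # should report Apache Maven 3.2.5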

5. Download Oozie 4.1
  wget http://mirror.apache-kr.org/oozie/4.1.0/oozie-4.1.0.tar.gz
5-1. By default the build targets Hadoop 1.1.1, so modify the build configuration so that it compiles against Hadoop 2.5.2 (see the sketch below).
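* One way to do this is to change the Hadoop version pinned by the hadoop-2 profile in the top-level pom.xml before building. The default value assumed here (2.3.0, judging from the JAR versions in the prepare-war log further below) is an assumption, so check the pom.xml that actually ships in the tarball and adjust the sed pattern accordingly:
 tar xvfz oozie-4.1.0.tar.gz
 cd oozie-4.1.0
 # assumes the hadoop-2 profile pins <hadoop.version>2.3.0</hadoop.version>
 sed -i 's|<hadoop.version>2.3.0</hadoop.version>|<hadoop.version>2.5.2</hadoop.version>|' pom.xml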

6. Build (the default build is against Hadoop 1.1.1, so pass -P hadoop-2 to build against Hadoop 2)
 cd oozie-4.1.0/bin
 ./mkdistro.sh -P hadoop-2 -DskipTests
 (After the build completes, the resulting distribution is created at: /home/hadoop/oozie-4.1.0/distro/target/oozie-4.1.0-distro.tar.gz)

7. Copy the build artifact from step 6 to the install location, then extract it and create a symbolic link
cp /home/hadoop/oozie-4.1.0/distro/target/oozie-4.1.0-distro.tar.gz /usr/local
cd /usr/local
tar xvfz oozie-4.1.0-distro.tar.gz
ln -s oozie-4.1.0/ oozie

8. Copy the hadooplibs and the other JAR files that will be added to the WAR file
mkdir libext
wget -P libext http://extjs.com/deploy/ext-2.2.zip
  (or: wget -P libext http://dev.sencha.com/deploy/ext-2.2.zip)
cp -R ../oozie-4.1.0/hadooplibs/hadoop-2/target/hadooplibs/hadooplib-2.4.1.oozie-4.0.1/* libext
  (the exact hadooplib-* directory name depends on the Hadoop/Oozie versions used for the build)
cp mysql-connector~.jar libext

* Rebuild the WAR file
[root@master oozie]$ oozie-setup.sh prepare-war
* Previously the -extjs option had to be passed when running oozie-setup.sh; now no separate option is needed. Copy ext-2.2.zip into the libext folder, run oozie-setup.sh, and it is found and configured automatically.

  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"

INFO: Adding extension: /hadoop/oozie/libext/activation-1.1.jar
INFO: Adding extension: /hadoop/oozie/libext/avro-1.7.4.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-beanutils-1.7.0.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-beanutils-core-1.8.0.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-cli-1.2.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-codec-1.4.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-collections-3.2.1.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-compress-1.4.1.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-configuration-1.6.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-digester-1.8.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-httpclient-3.1.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-io-2.1.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-lang-2.4.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-logging-1.1.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-math3-3.1.1.jar
INFO: Adding extension: /hadoop/oozie/libext/commons-net-3.1.jar
INFO: Adding extension: /hadoop/oozie/libext/guava-11.0.2.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-annotations-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-auth-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-client-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-common-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-hdfs-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-mapreduce-client-app-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-mapreduce-client-common-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-mapreduce-client-core-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-mapreduce-client-jobclient-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-mapreduce-client-shuffle-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-yarn-api-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-yarn-client-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-yarn-common-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/hadoop-yarn-server-common-2.3.0.jar
INFO: Adding extension: /hadoop/oozie/libext/httpclient-4.2.5.jar
INFO: Adding extension: /hadoop/oozie/libext/httpcore-4.2.4.jar
INFO: Adding extension: /hadoop/oozie/libext/jackson-core-asl-1.8.8.jar
INFO: Adding extension: /hadoop/oozie/libext/jackson-mapper-asl-1.8.8.jar
INFO: Adding extension: /hadoop/oozie/libext/jaxb-api-2.2.2.jar
INFO: Adding extension: /hadoop/oozie/libext/jersey-core-1.9.jar
INFO: Adding extension: /hadoop/oozie/libext/jetty-util-6.1.26.jar
INFO: Adding extension: /hadoop/oozie/libext/jsr305-1.3.9.jar
INFO: Adding extension: /hadoop/oozie/libext/log4j-1.2.16.jar
INFO: Adding extension: /hadoop/oozie/libext/paranamer-2.3.jar
INFO: Adding extension: /hadoop/oozie/libext/postgresql-9.3-1103.jdbc4.jar
INFO: Adding extension: /hadoop/oozie/libext/protobuf-java-2.5.0.jar
INFO: Adding extension: /hadoop/oozie/libext/servlet-api-2.5.jar
INFO: Adding extension: /hadoop/oozie/libext/slf4j-api-1.6.6.jar
INFO: Adding extension: /hadoop/oozie/libext/slf4j-log4j12-1.6.6.jar
INFO: Adding extension: /hadoop/oozie/libext/snappy-java-1.0.4.1.jar
INFO: Adding extension: /hadoop/oozie/libext/stax-api-1.0-2.jar
INFO: Adding extension: /hadoop/oozie/libext/xmlenc-0.52.jar
INFO: Adding extension: /hadoop/oozie/libext/xz-1.0.jar
INFO: Adding extension: /hadoop/oozie/libext/zookeeper-3.4.5.jar

New Oozie WAR file with added 'ExtJS library, JARs' at /hadoop/oozie/oozie-server/webapps/oozie.war


INFO: Oozie is ready to be started
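
* To double-check that the ExtJS library and the Hadoop JARs really made it into the rebuilt WAR, the archive can be listed with the JDK's jar tool (a sanity check only; the WAR path is the one printed above):
 jar tf /hadoop/oozie/oozie-server/webapps/oozie.war | grep -c hadoop-common
 jar tf /hadoop/oozie/oozie-server/webapps/oozie.war | grep ext-2.2 | head -5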

9. Set the environment variables (/etc/profile)
export OOZIE_HOME=/hadoop/oozie
export PATH=$PATH:$OOZIE_HOME/bin

* Apply the changes: source /etc/profile
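
* A quick check that the Oozie client script is now on the PATH (the version subcommand works without the server running):
 oozie version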

10. Update the DB settings (oozie-site.xml)
   <property>
        <name>oozie.db.schema.name</name>
        <value>ooziedb</value>
        <description>
            Oozie DataBase Name
        </description>
    </property>

    <property>
        <name>oozie.service.JPAService.create.db.schema</name>
        <value>false</value>
        <description>
            Creates Oozie DB.

            If set to true, it creates the DB schema if it does not exist. If the DB schema exists is a NOP.
            If set to false, it does not create the DB schema. If the DB schema does not exist it fails start up.
        </description>
    </property>
    <property>
        <name>oozie.service.JPAService.jdbc.driver</name>
        <value>org.postgresql.Driver</value>
        <description>
            JDBC driver class.
        </description>
    </property>

    <property>
        <name>oozie.service.JPAService.jdbc.url</name>
        <value>jdbc:postgresql://node1/${oozie.db.schema.name}</value>
        <description>
            JDBC URL.
        </description>
    </property>


    <property>
        <name>oozie.service.JPAService.jdbc.username</name>
        <value>oozie</value>
        <description>
            DB user name.
        </description>
    </property>

    <property>
        <name>oozie.service.JPAService.jdbc.password</name>
        <value>oozie_pass</value>
        <description>
            DB user password.

            IMPORTANT: if password is emtpy leave a 1 space string, the service trims the value,
                       if empty Configuration assumes it is NULL.
        </description>
    </property>
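
* The JDBC settings above assume that the ooziedb database and the oozie account already exist on node1. A minimal sketch for creating them with psql, using exactly the names and password from oozie-site.xml above (run as a PostgreSQL superuser; adjust host and port to your installation):
 psql -h node1 -U postgres -c "CREATE USER oozie WITH PASSWORD 'oozie_pass';"
 psql -h node1 -U postgres -c "CREATE DATABASE ooziedb OWNER oozie;"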

* Uncomment the section below and be sure to change the #USER# value to the account that runs Oozie (e.g. hadoop or oozie).
(It is not entirely clear whether this has to match the proxy user configured in core-site.xml; it appears to run even with oozie specified here and hadoop in core-site.xml.)

    <property>
        <name>oozie.service.ProxyUserService.proxyuser.#USER#.hosts</name>
        <value>*</value>
        <description>
            List of hosts the '#USER#' user is allowed to perform 'doAs'
            operations.

            The '#USER#' must be replaced with the username o the user who is
            allowed to perform 'doAs' operations.

            The value can be the '*' wildcard or a list of hostnames.

            For multiple users copy this property and replace the user name
            in the property name.
        </description>
    </property>

    <property>
        <name>oozie.service.ProxyUserService.proxyuser.hadoop.groups</name>
        <value>*</value>
        <description>
            List of groups the '#USER#' user is allowed to impersonate users
            from to perform 'doAs' operations.

            The '#USER#' must be replaced with the username o the user who is
            allowed to perform 'doAs' operations.

            The value can be the '*' wildcard or a list of groups.

            For multiple users copy this property and replace the user name
            in the property name.
        </description>
    </property>


11. core-site.xml

Grant proxy privileges to the account that runs Oozie jobs (both <value></value> entries may also be set to *)


<property>
         <name>hadoop.proxyuser.[userId].hosts</name>
         <value>MasterNode</value>
</property>
<property>
         <name>hadoop.proxyuser.[userId].groups</name>
         <value>[userId]</value>
</property>
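
* Proxy-user changes in core-site.xml are read by the NameNode and the ResourceManager, so they need either a restart or (on most Hadoop 2.x setups) a configuration refresh. A hedged sketch, to be run as the HDFS/YARN admin user:
 hdfs dfsadmin -refreshSuperUserGroupsConfiguration
 yarn rmadmin -refreshSuperUserGroupsConfiguration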


12. Create the DB and the related tables, using the settings above that point Oozie at an external DB (PostgreSQL here) instead of the default Derby
./ooziedb.sh create -sqlfile oozie.sql -run
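
* Before the checks in steps 13 and 14 the Oozie daemon has to be running. A minimal sketch (oozied.sh is under $OOZIE_HOME/bin; the oozie.log location under $OOZIE_HOME/logs is an assumption about the default layout):
 cd $OOZIE_HOME/bin
 ./oozied.sh start
 tail -f ../logs/oozie.log     # watch for startup errors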

13. Verify (command line)
[root@master logs]$ oozie admin -oozie http://localhost:11000/oozie -status
System mode: NORMAL

14. Verify (URL)
http://master:11000/oozie/
* If after some time oozied goes down with errors and catalina.out shows
java.lang.NoClassDefFoundError: org/apache/hadoop/util/ShutdownHookManager,
copy Hadoop's common library into the lib directory of the Oozie installation (/usr/local/hadoop/share/hadoop/common/hadoop-common-2.5.2.jar into /usr/local/oozie/lib), then restart the Oozie daemon with oozied.sh stop and oozied.sh start.
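
* The same fix expressed as commands (paths exactly as mentioned above):
 cp /usr/local/hadoop/share/hadoop/common/hadoop-common-2.5.2.jar /usr/local/oozie/lib
 cd /usr/local/oozie/bin
 ./oozied.sh stop
 ./oozied.sh start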


