When the Hadoop cluster is restarted after server maintenance with Kerberos enabled, bringing each service back up may fail with java.io.IOException: Could not configure server because SASL configuration did not allow the Zookeeper server to authenticate itself properly: javax.security.auth.login.LoginException: Checksum failed. In that case, go to CM->Zookeeper->Instances, select the affected instance, and click Regenerate Keytab on the "Actions for Selected" tab to recreate zookeeper.keytab; after restarting, the service comes up normally. (The other services provide the same keytab regeneration function in the corresponding menu for each of them.)

(Service principals (e.g. zookeeper/host_FQDN@REALM) are created and supplied by Cloudera Manager when it starts the service, so they do not exist as files in a physical location and do not need to be saved as files.)

The list of service principals managed by Cloudera Manager can be checked under CM->Administration->Security->Kerberos Credentials.
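
To confirm on the host that a regenerated keytab is actually usable, one option (a minimal sketch; the keytab path below is only an example, the real file for a CM-managed role lives under that role's process directory) is to list its entries and obtain a ticket with it:

  # list the principals and key version numbers stored in the keytab (example path)
  klist -kt /path/to/zookeeper.keytab

  # try to get a TGT with one of the listed principals, then show it
  kinit -kt /path/to/zookeeper.keytab zookeeper/node01.gooper.com@GOOPER.COM
  klist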


*Note : Kerberos

- Install location : /var/kerberos/krb5, /var/kerberos/krb5kdc

- Daemons : started as /usr/sbin/krb5kdc -P /var/run/krb5kdc.pid and /usr/sbin/kadmind -P /var/run/kadmind.pid

- Management commands : systemctl status krb5kdc, systemctl status kadmin


- Listing principals

  enter sudo kadmin.local and run the command below.

  listprincs
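
The same listing can also be run without entering the kadmin.local shell; a minimal sketch (the grep pattern is only an example):

  # print every principal known to the KDC
  sudo kadmin.local -q "listprincs"

  # show only the zookeeper service principals
  sudo kadmin.local -q "listprincs" | grep '^zookeeper/'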


*Note 2 : contents of the JVM's jre/lib/security/jaas.conf file

Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="zookeeper.keytab"
  storeKey=true
  useTicketCache=false
  principal="zookeeper/node01.gooper.com@GOOPER.COM";
};
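
Before starting the server it is worth checking that the keytab named in this file really holds the declared principal and that its key is not stale; a minimal sketch, assuming zookeeper.keytab sits in the current directory:

  # entries stored in the keytab, with their key version numbers (KVNO)
  klist -kt zookeeper.keytab

  # key version currently registered in the KDC for the same principal
  sudo kadmin.local -q "getprinc zookeeper/node01.gooper.com@GOOPER.COM"

If the key version in the keytab no longer matches the KDC, the keytab is stale; a stale or mismatched key is a common cause of the Checksum failed error described above.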


*Note 3 : /var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]
  kdc_ports = 88
  kdc_tcp_ports = 88

[realms]
  GOOPER.COM = {
   acl_file = /var/kerberos/krb5kdc/kadm5.acl
   dict_file = /usr/share/dict/words
   admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
   supported_enctypes = aes256-cts:normal aes128-cts:normal arcfour-hmac:normal
   max_renewable_life = 7d
   udp_preference_limit = 1
  }
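
Changes to kdc.conf only take effect after the KDC daemons are restarted; a minimal sketch using the systemd units listed above:

  sudo systemctl restart krb5kdc
  sudo systemctl restart kadmin
  sudo systemctl status krb5kdc kadmin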


*Note 4 : vi /etc/krb5.conf

[logging]
    default = FILE:/var/log/krb5lib.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmin.log

[libdefaults]
    default_realm = GOOPER.COM
    dns_lookup_realm = false
    dns_lookup_kdc = false
    ticket_lifetime = 24h
    renew_lifetime = 7d
    forwardable = true
    udp_preference_limit = 1
    default_tkt_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1
    default_tgs_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1
    permitted_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1

[kdc]
    profile = /var/kerberos/krb5kdc/kdc.conf

[realms]
    GOOPER.COM = {
        kdc = node01.gooper.com:88
        kdc = node02.gooper.com:88
        admin_server = node01.gooper.com:749
    }
    DEV.GOOPER.COM = {
        kdc = node01.dev.gooper.com:88
        kdc = node02.dev.gooper.com:88
        admin_server = node01.dev.gooper.com:749
    }

[domain_realm]
    .gooper.com = GOOPER.COM
    gooper.com = GOOPER.COM
    .dev.gooper.com = DEV.GOOPER.COM
    dev.gooper.com = DEV.GOOPER.COM
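
With krb5.conf in place, the client side can be checked quickly; a minimal sketch (the user principal is only an example, not one taken from this cluster):

  # request a TGT from one of the GOOPER.COM KDCs defined above
  kinit testuser@GOOPER.COM

  # show the cached ticket and its lifetime/renew-until values
  klist

  # discard the test ticket
  kdestroy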



*Note 5 : per-service jaas.conf settings

https://www.cloudera.com/documentation/enterprise/5-14-x/topics/cdh_sg_zookeeper_security.html#prerequisites

 2.3.5.2. Create the JAAS Configuration Files
  1. Create the following JAAS configuration files on the HBase Master, RegionServer, and HBase client host machines.

    These files must be created under the $HBASE_CONF_DIR directory:

    where $HBASE_CONF_DIR is the directory that stores the HBase configuration files, such as /etc/hbase/conf.

    • On your HBase Master host machine, create the hbase-server.jaas file under the /etc/hbase/conf directory and add the following content:

      Server {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      useTicketCache=false
      keyTab="/etc/security/keytabs/hbase.service.keytab"
      principal="hbase/$HBase.Master.hostname";
      };                         
    • On each RegionServer host machine, create the regionserver.jaas file under the /etc/hbase/conf directory and add the following content:

      Server {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      useTicketCache=false
      keyTab="/etc/security/keytabs/hbase.service.keytab"
      principal="hbase/$RegionServer.hostname";
      };                     
    • On your HBase client machines, create the hbase-client.jaas file under the /etc/hbase/conf directory and add the following content:

      Client {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=false
      useTicketCache=true;
      };                   

  2. Create the following JAAS configuration files on the ZooKeeper Server and client host machines.

    These files must be created under the $ZOOKEEPER_CONF_DIR directory:

    where $ZOOKEEPER_CONF_DIR is the directory that stores the ZooKeeper configuration files, such as /etc/zookeeper/conf.

    • On the ZooKeeper server host machines, create the zookeeper-server.jaas file under the /etc/zookeeper/conf directory and add the following content:

      Server {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      useTicketCache=false
      keyTab="/etc/security/keytabs/zookeeper.service.keytab"
      principal="zookeeper/$ZooKeeper.Server.hostname";
      };                         
    • On each ZooKeeper client host machine, create the zookeeper-client.jaas file under the /etc/zookeeper/conf directory and add the following content:

      Client {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=false
      useTicketCache=true;
      };                 

  3. Edit the hbase-env.sh file on your HBase server to add the following information:

    export HBASE_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase-client.jaas"
    export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase-server.jaas"
    export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/regionserver.jaas"

    where HBASE_CONF_DIR is the HBase configuration directory. For example, /etc/hbase/conf.

  4. Edit the zoo.cfg file on your ZooKeeper server to add the following information:

    authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
    jaasLoginRenew=3600000
    kerberos.removeHostFromPrincipal=true
    kerberos.removeRealmFromPrincipal=true     
  5. Edit the zookeeper-env.sh file on your ZooKeeper server to add the following information:

    export SERVER_JVMFLAGS="-Djava.security.auth.login.config=$ZOOKEEPER_CONF_DIR/zookeeper-server.jaas"
    export CLIENT_JVMFLAGS="-Djava.security.auth.login.config=$ZOOKEEPER_CONF_DIR/zookeeper-client.jaas"

    where $ZOOKEEPER_CONF_DIR is the ZooKeeper configuration directory. For example, /etc/zookeeper/conf.
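
Once the client JAAS file exists, the whole chain can be exercised from a client host; a minimal sketch (the zookeeper-client wrapper script, znode path, and user principal are assumptions and may differ per distribution):

  # point the ZooKeeper CLI at the client JAAS configuration
  export CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/etc/zookeeper/conf/zookeeper-client.jaas"

  # authenticate, then connect and list the root znode over SASL
  kinit testuser@GOOPER.COM
  zookeeper-client -server node01.gooper.com:2181 ls /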
