Sometimes the Hadoop daemons are all started and running normally when, out of nowhere, a DataNode dies with the error shown below.

The cause is insufficient heap memory. When this happens, change the heap sizes as described below, apply the change on every server, and restart the whole Hadoop cluster so it takes effect.

(The node where the problem occurs has the same amount of memory as the rest of the cluster but roughly 2.5x the hard-disk capacity, so it takes in more data than the other nodes. Since a DataNode's heap usage grows with the number of blocks it stores, running this node with the same heap settings as the other nodes appears to be what exhausts its heap.)
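Before raising the heap, it is worth confirming that the DataNode really is thrashing in GC, as the JvmPauseMonitor lines in the log suggest. A minimal sketch, assuming the JDK tools jps and jstat are on the PATH of the account running the daemon:

# Find the DataNode's JVM process id
jps | grep DataNode

# Sample GC counters every 5 seconds; if old-gen occupancy (O) stays
# near 100% and full-GC time (FGCT) keeps climbing, the heap is too small
jstat -gcutil <DataNode pid> 5000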


1. In hadoop-env.sh:
Change export HADOOP_HEAPSIZE
to export HADOOP_HEAPSIZE=3000 (the value is in MB).

Change export HADOOP_NAMENODE_INIT_HEAPSIZE=""
to export HADOOP_NAMENODE_INIT_HEAPSIZE="2000".
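For reference, the edited section of hadoop-env.sh would end up looking like this (3000 and 2000 are the values chosen for this cluster, not universal defaults):

# Maximum heap, in MB, for Hadoop daemons such as the DataNode
export HADOOP_HEAPSIZE=3000

# Initial heap for the NameNode
export HADOOP_NAMENODE_INIT_HEAPSIZE="2000"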


2. In mapred-env.sh:
Change export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
to export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=2000.


3. In yarn-env.sh:
Change JAVA_HEAP_MAX=-Xmx1000m
to JAVA_HEAP_MAX=-Xmx2000m.

Uncomment # YARN_HEAPSIZE=1000 and change it
to YARN_HEAPSIZE=2000 (when YARN_HEAPSIZE is set, yarn-env.sh uses it, in MB, as the daemons' -Xmx, overriding JAVA_HEAP_MAX).

Then copy the edited files to every node and restart, as shown in the sketch below.
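A minimal sketch of the distribute-and-restart step, assuming passwordless SSH, a common $HADOOP_HOME on every node, and a hypothetical cluster_hosts.txt listing one hostname per line:

# Push the edited env files to every node
for host in $(cat cluster_hosts.txt); do
  scp $HADOOP_HOME/etc/hadoop/{hadoop-env.sh,mapred-env.sh,yarn-env.sh} \
      $host:$HADOOP_HOME/etc/hadoop/
done

# Restart YARN, HDFS, and the job history server so the new heap sizes apply
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh stop historyserver
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver

# Verify that the DataNode picked up the new maximum heap
ps -ef | grep '[D]ataNode' | grep -o 'Xmx[0-9]*m'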




-----------------------------------Error log--------------------------------
2017-07-18 20:20:38,668 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1265ms
GC pool 'PS MarkSweep' had collection(s): count=2 time=1764ms
2017-07-18 20:20:32,678 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-605282214-XXX.XXX.XXX.XXX-1498555165989:blk_1076520983_2780234 received exception java.io.IOException: Premature EOF from inputStream
2017-07-18 20:20:30,934 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-605282214-XXX.XXX.XXX.XXX-1498555165989:blk_1076520963_2780213, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2017-07-18 20:20:47,191 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-605282214-XXX.XXX.XXX.XXX-1498555165989:blk_1076520963_2780213 received exception java.io.IOException: Premature EOF from inputStream
2017-07-18 20:20:47,191 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-605282214-XXX.XXX.XXX.XXX-1498555165989 (Datanode Uuid d4f1b1f7-0636-483d-91e8-4780b73fb392) service to sda1/XXX.XXX.XXX.XXX:9000 beginning handshake with NN
2017-07-18 20:20:48,073 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: sda2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /XXX.XXX.XXX.XXX:43840 dst: /XXX.XXX.XXX.XXX:50010
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:801)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
        at java.lang.Thread.run(Thread.java:745)
2017-07-18 20:20:48,073 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: sda2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /166.104.112.69:45343 dst: /XXX.XXX.XXX.XXX:50010
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:801)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
        at java.lang.Thread.run(Thread.java:745)
2017-07-18 20:20:57,083 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-605282214-XXX.XXX.XXX.XXX-1498555165989 (Datanode Uuid d4f1b1f7-0636-483d-91e8-4780b73fb392) service to sda1/XXX.XXX.XXX.XXX:9000 successfully registered with NN
2017-07-18 20:21:00,044 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1178ms
GC pool 'PS MarkSweep' had collection(s): count=2 time=1677ms
2017-07-18 20:21:05,452 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2133ms
GC pool 'PS MarkSweep' had collection(s): count=3 time=2632ms
2017-07-18 20:21:12,342 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1229ms
GC pool 'PS MarkSweep' had collection(s): count=2 time=1729ms
2017-07-18 20:21:14,056 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1214ms
GC pool 'PS MarkSweep' had collection(s): count=2 time=1713ms
2017-07-18 20:21:28,386 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected exception in block pool Block pool BP-605282214-XXX.XXX.XXX.XXX-1498555165989 (Datanode Uuid d4f1b1f7-0636-483d-91e8-4780b73fb392) service to sda1/166.104.112.43:9000
java.lang.OutOfMemoryError: Java heap space
2017-07-18 20:21:28,386 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-605282214-XXX.XXX.XXX.XXX-1498555165989 (Datanode Uuid d4f1b1f7-0636-483d-91e8-4780b73fb392) service to sda1/166.104.112.43:9000
2017-07-18 20:21:29,958 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1205ms
GC pool 'PS MarkSweep' had collection(s): count=2 time=1704ms
2017-07-18 20:21:40,231 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1431ms
GC pool 'PS MarkSweep' had collection(s): count=2 time=1931ms
2017-07-18 20:21:45,597 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3310ms
GC pool 'PS MarkSweep' had collection(s): count=4 time=3808ms
2017-07-18 20:21:55,800 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-605282214-XXX.XXX.XXX.XXX-1498555165989 (Datanode Uuid d4f1b1f7-0636-483d-91e8-4780b73fb392)
2017-07-18 20:21:58,707 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 5945ms
GC pool 'PS MarkSweep' had collection(s): count=12 time=13105ms
2017-07-18 20:22:00,356 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Removing block pool BP-605282214-XXX.XXX.XXX.XXX-1498555165989
2017-07-18 20:22:03,000 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-07-18 20:22:03,001 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-07-18 20:22:03,002 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1085ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=1233ms
2017-07-18 20:22:03,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at sda2/XXX.XXX.XXX.XXX
************************************************************/
