

When this error occurs, increase the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb', as the bottom of the error message suggests. In this case the executor needs 1024 MB plus a 384 MB overhead, i.e. 1408 MB in total, so both settings must be at least that large; a configuration sketch follows.
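A minimal yarn-site.xml sketch, assuming a 2048 MB ceiling is acceptable on the NodeManagers (the 2048 value is illustrative; on a CDH/CDP cluster the same properties are normally changed through Cloudera Manager and the affected services restarted, rather than by editing the file directly):

<property>
  <!-- largest single container the ResourceManager will grant -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <!-- total memory each NodeManager offers to its containers -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>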


------------Error details

18/06/08 14:56:48 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!

18/06/08 14:56:48 ERROR util.Utils: Uncaught exception in thread main

java.lang.NullPointerException

        at org.apache.spark.network.shuffle.ExternalShuffleClient.close(ExternalShuffleClient.java:152)

        at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1338)

        at org.apache.spark.SparkEnv.stop(SparkEnv.scala:97)

        at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1786)

        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1221)

        at org.apache.spark.SparkContext.stop(SparkContext.scala:1785)

        at org.apache.spark.SparkContext.<init>(SparkContext.scala:610)

        at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1022)

        at $line3.$read$$iwC$$iwC.<init>(<console>:15)

        at $line3.$read$$iwC.<init>(<console>:25)

        at $line3.$read.<init>(<console>:27)

        at $line3.$read$.<init>(<console>:31)

        at $line3.$read$.<clinit>(<console>)

        at $line3.$eval$.<init>(<console>:7)

        at $line3.$eval$.<clinit>(<console>)

        at $line3.$eval.$print(<console>)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorI


.....

java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.

        at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:281)

        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:140)

        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)

        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:151)

        at org.apache.spark.SparkContext.<init>(SparkContext.scala:538)

        at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1022)

        at $iwC$$iwC.<init>(<console>:15)

        at $iwC.<init>(<console>:25)

        at <init>(<console>:27)

        at .<init>(<console>:31)

        at .<clinit>(<console>)

        at .<init>(<console>:7)

        at .<clinit>(<console>)

        at $print(<console>)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:606)

        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)

        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)

        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)

        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)

        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)

        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)

        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)

        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)

        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)

        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)

        at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:305)

        at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)

        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
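------------Alternative when the YARN limits cannot be raised

The request can instead be shrunk to fit under the existing 1024 MB threshold: the executor memory plus the roughly 384 MB overhead must stay at or below 'yarn.scheduler.maximum-allocation-mb'. A minimal sketch, assuming the shell is started in yarn-client mode as in the trace above (512m is illustrative; 512 + 384 = 896 MB fits under 1024 MB, and newer Spark versions use --master yarn instead):

spark-shell --master yarn-client --executor-memory 512m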

