

If creating a collection as shown below fails with "Core with name 'xx_shard4_replica1' already exists", check bin/solr.in.sh and make sure solr.data.dir is not set explicitly, and do not pass any solr.data.dir-related option when starting the Solr instance with bin/solr, so that each core's data directory is created in its default location. Otherwise every core ends up pointing at the same index directory, a "/user/root/solr/data/index/write.lock for client XXX.XXX.XXX.XXX already exists" exception is thrown as shown below, and the collection is not created.
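As a rough sketch (the exact variable names depend on the Solr version; SOLR_DATA_HOME and the -Dsolr.data.dir system property below are the usual overrides, not values taken from this installation), the point is to leave any data-directory override commented out in bin/solr.in.sh and restart Solr without one:

# bin/solr.in.sh -- leave data-directory overrides commented out
# SOLR_DATA_HOME=/user/root/solr/data
# SOLR_OPTS="$SOLR_OPTS -Dsolr.data.dir=/user/root/solr/data"

# restart the Solr instance on every node without a solr.data.dir option
./solr stop -all
./solr start -cloud -z gsda1:2181,gsda2:2181,gsda3:2181 -p 8983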



------------------------------------ Error output ---------------------------

root@gsda1:~/solr/bin# ./solr create -c gc -shards 4 -replicationFactor 2

Connecting to ZooKeeper at gsda1:2181,gsda2:2181,gsda3:2181 ...
Re-using existing configuration directory gc

Creating new collection 'gc' using command:
http://localhost:8983/solr/admin/collections?action=CREATE&name=gc&numShards=4&replicationFactor=2&maxShardsPerNode=2&collection.configName=gc


ERROR: Failed to create collection 'gc' due to: {XXX.XXX.XXX.XXX:8983_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://XXX.XXX.XXX.XXX:8983/solr: Error CREATEing SolrCore 'gc_shard4_replica1': Unable to create core [gc_shard4_replica1] Caused by: /user/root/solr/data/index/write.lock for client XXX.XXX.XXX.XXX already exists
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:359)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2234)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2154)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:727)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
, XXX.XXX.XXX.XXX:8983_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://XXX.XXX.XXX.XXX:8983/solr: Error CREATEing SolrCore 'gc_shard3_replica1': Unable to create core [gc_shard3_replica1] Caused by: /user/root/solr/data/index/write.lock for client XXX.XXX.XXX.XXX already exists
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:359)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2234)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2154)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:727)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
  XXX.XXX.XXX.XXX:8983_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://XXX.XXX.XXX.XXX:8983/solr: Error CREATEing SolrCore 'gc_shard3_replica2': Unable to create core [gc_shard3_replica2] Caused by: /user/root/solr/data/index/write.lock for client XXX.XXX.XXX.XXX already exists
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:359)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2234)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2154)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:727)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
, XXX.XXX.XXX.XXX:8983_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://XXX.XXX.XXX.XXX:8983/solr: Error CREATEing SolrCore 'gc_shard4_replica2': Unable to create core [gc_shard4_replica2] Caused by: /user/root/solr/data/index/write.lock for client XXX.XXX.XXX.XXX already exists
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:359)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2234)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2154)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:727)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
}
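
Once solr.data.dir is no longer forced onto a single shared directory and the nodes have been restarted, re-running the create command should succeed. If a partially created collection was left behind by the failed attempt, it may need to be deleted first. A quick way to confirm, assuming the standard bin/solr subcommands and the ZooKeeper hosts shown in the log above:

# optional: clean up the partially created collection from the failed attempt
./solr delete -c gc

# recreate the collection and check its state
./solr create -c gc -shards 4 -replicationFactor 2
./solr healthcheck -c gc -z gsda1:2181,gsda2:2181,gsda3:2181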
