

Hive index creation, deletion, and usage

Admin  2014.04.25 16:41  Views: 1765

1. Creating the index

hive> create index h_price_info_index on table h_price_info (key_id) as 'COMPACT' WITH DEFERRED REBUILD;
OK
Time taken: 6.898 seconds
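For reference (the title also covers deletion): the index can be removed again with DROP INDEX, which should also drop the physical index table it created. A minimal sketch, reusing the index and table names from step 1:

hive> drop index if exists h_price_info_index on h_price_info;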

 

2. Checking the index information
hive> show formatted index on h_price_info;
OK
idx_name             tab_name        col_names   idx_tab_name                                  idx_type   comment
h_price_info_index   h_price_info    key_id      default__h_price_info_h_price_info_index__   compact
Time taken: 0.402 seconds, Fetched: 4 row(s)
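The idx_tab_name column above is the physical table backing the index. After the rebuild in step 3 it can be inspected like any other table; a COMPACT index table normally stores the indexed key together with the HDFS file name and offsets. A sketch, assuming the generated table name shown in the output above:

hive> describe default__h_price_info_h_price_info_index__;
hive> select * from default__h_price_info_h_price_info_index__ limit 10;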


3. Physically building the index
hive> alter index h_price_info_index on h_price_info rebuild;

--> The following error can occur at this point; if it does, launch hive with the library path (auxpath) specified as shown below.

(hive --auxpath /home/hadoop/hive/lib/hbase-0.94.6.1.jar,/home/hadoop/hive/lib/zookeeper-3.4.3.jar,/home/hadoop/hive/lib/hive-hbase-handler-0.11.0.jar,/home/hadoop/hive/lib/guava-11.0.2.jar,/home/hadoop/hive/lib/hive-contrib-0.11.0.jar -hiveconf hbase.master=localhost:60000 )

 

Total MapReduce jobs = 1

----------------------Error details----------------------
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201404241444_0032, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201404241444_0032
Kill Command = /home/hadoop/hadoop/libexec/../bin/hadoop job  -kill job_201404241444_0032
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2014-04-25 16:39:27,621 Stage-1 map = 0%,  reduce = 0%
2014-04-25 16:40:22,624 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201404241444_0032 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201404241444_0032
Examining task ID: task_201404241444_0032_m_000003 (and more) from job job_201404241444_0032

Task with the most failures(4):
-----
Task ID:
  task_201404241444_0032_m_000000

URL:
  http://localhost:50030/taskdetails.jsp?jobid=job_201404241444_0032&tipid=task_201404241444_0032_m_000000
-----
Diagnostic Messages for this Task:
java.io.IOException: Cannot create an instance of InputSplit class = org.apache.hadoop.hive.hbase.HBaseSplit:org.apache.hadoop.hive.hbase.HBaseSplit
 at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:146)
 at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
 at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
 at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:390)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:406)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
 at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.hbase.HBaseSplit
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
 at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:143)
 ... 10 more


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 2  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
--------------------------------------------------------------------------------------------
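The ClassNotFoundException above means the HBase handler classes were missing on the MapReduce task side. As an alternative to passing --auxpath on every launch, the same jars can usually be added from inside the session with ADD JAR (Hive then ships them to the cluster through the distributed cache) before retrying the rebuild. A sketch, assuming the same jar locations as in the --auxpath command above:

hive> add jar /home/hadoop/hive/lib/hive-hbase-handler-0.11.0.jar;
hive> add jar /home/hadoop/hive/lib/hbase-0.94.6.1.jar;
hive> add jar /home/hadoop/hive/lib/zookeeper-3.4.3.jar;
hive> add jar /home/hadoop/hive/lib/guava-11.0.2.jar;
hive> add jar /home/hadoop/hive/lib/hive-contrib-0.11.0.jar;
hive> alter index h_price_info_index on h_price_info rebuild;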

 

4. Enabling index usage

set hive.optimize.autoindex=true;
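Depending on the Hive version, automatic index use is driven by a couple of additional properties as well; in particular, the compact index is only consulted when the input is larger than a minimum size (around 5 GB by default, if I remember correctly), so on a small test table nothing changes unless that threshold is lowered. A sketch (property names as of Hive 0.8+; verify them against your version):

-- rewrite WHERE filters to go through the compact index
set hive.optimize.index.filter=true;
-- only inputs larger than this (bytes) use the index; lower it for small test tables
set hive.optimize.index.filter.compact.minsize=1;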

 

5. Running a query

select * from h_price_info where key_id like '%고추%';

 

That said, I'm not sure whether it's actually any faster...
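One likely reason the speedup is hard to see: a compact index keeps the indexed keys sorted together with their file offsets, so it can help equality or prefix/range predicates on key_id, while a predicate with a leading wildcard such as like '%고추%' still has to read everything. Whether the index is actually picked up can be checked with EXPLAIN; a sketch using the same literal as above:

-- if the rewrite kicks in, the plan should reference the index table
-- (default__h_price_info_h_price_info_index__) rather than only a full scan of h_price_info
explain select * from h_price_info where key_id = '고추';
explain select * from h_price_info where key_id like '%고추%';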
