Inserting values into an HBase table from a Hive external table


1. Create an external Hive table. It points at a file on HDFS and serves as the source table for inserting values into the HBase table (hbase_mytable, below).

CREATE EXTERNAL TABLE IF NOT EXISTS external_file
     (
     FOO STRING,
     BAR STRING
     )
     COMMENT 'TEST TABLE OF EMP_IP_TABLE'
     ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
     STORED AS TEXTFILE LOCATION '/data';

 

* Contents of the file in /data

hadoop@bigdata-host:~/hive/conf$ hadoop fs -cat /data/external_file.txt
a,b
a1,b1
a2,b2
a3,b3
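
For reference, a minimal sketch of how the sample file could have been placed in HDFS (assuming a local file named external_file.txt and that /data does not exist yet; adjust paths as needed):

hadoop@bigdata-host:~$ hadoop fs -mkdir /data
hadoop@bigdata-host:~$ hadoop fs -put external_file.txt /data/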

2. Create the Hive table that maps to the HBase table.

CREATE EXTERNAL TABLE hbase_mytable(table_id string, foo string, bar string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:foo,cf:bar")
TBLPROPERTIES("hbase.table.name" = "mytable");
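
Note that the HBase table mytable (with column family cf) must already exist before this DDL is run: with EXTERNAL, Hive only maps to the existing HBase table and does not create it. A minimal sketch of creating it from the hbase shell:

hbase(main):001:0> create 'mytable', 'cf'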

 

3. When starting Hive, include the required jars as shown below.

hadoop@bigdata-host:~/hive/bin$ hive --auxpath /home/hadoop/hive/lib/hbase-0.94.6.1.jar,/home/hadoop/hive/lib/zookeeper-3.4.3.jar,/home/hadoop/hive/lib/hive-hbase-handler-0.11.0.jar,/home/hadoop/hive/lib/guava-11.0.2.jar,/home/hadoop/hive/lib/hive-contrib-0.11.0.jar -hiveconf hbase.master=localhost:60000
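
Passing --auxpath on every start can be avoided by registering the same jars once, for example through the HIVE_AUX_JARS_PATH environment variable (a sketch using the same paths as above; verify the exact mechanism against your Hive version):

export HIVE_AUX_JARS_PATH=/home/hadoop/hive/lib/hbase-0.94.6.1.jar,/home/hadoop/hive/lib/zookeeper-3.4.3.jar,/home/hadoop/hive/lib/hive-hbase-handler-0.11.0.jar,/home/hadoop/hive/lib/guava-11.0.2.jar,/home/hadoop/hive/lib/hive-contrib-0.11.0.jar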

4. In the Hive shell, after the tables above are created, run the insert into the HBase table. Execution output:

hive> insert into table hbase_mytable select foo, foo, bar from external_file;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201404111158_0008, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201404111158_0008
Kill Command = /home/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job  -kill job_201404111158_0008
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2014-04-11 13:56:49,322 Stage-0 map = 0%,  reduce = 0%
2014-04-11 13:56:55,482 Stage-0 map = 100%,  reduce = 0%, Cumulative CPU 1.74 sec
2014-04-11 13:56:56,500 Stage-0 map = 100%,  reduce = 0%, Cumulative CPU 1.74 sec
2014-04-11 13:56:57,518 Stage-0 map = 100%,  reduce = 0%, Cumulative CPU 1.74 sec
2014-04-11 13:56:58,541 Stage-0 map = 100%,  reduce = 0%, Cumulative CPU 1.74 sec
2014-04-11 13:56:59,561 Stage-0 map = 100%,  reduce = 0%, Cumulative CPU 1.74 sec
2014-04-11 13:57:00,587 Stage-0 map = 100%,  reduce = 100%, Cumulative CPU 1.74 sec
MapReduce Total cumulative CPU time: 1 seconds 740 msec
Ended Job = job_201404111158_0008
4 Rows loaded to hbase_mytable
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 1.74 sec   HDFS Read: 220 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 740 msec
OK
Time taken: 34.565 seconds
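
In the SELECT above, foo is projected twice: the first column feeds :key (the HBase row key) and the remaining two feed cf:foo and cf:bar. Each inserted row is therefore roughly equivalent to a pair of puts in the hbase shell (an illustrative sketch for the first data row only):

hbase(main):001:0> put 'mytable', 'a', 'cf:foo', 'a'
hbase(main):002:0> put 'mytable', 'a', 'cf:bar', 'b'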

5. Check the values in hbase_mytable. Rows that already existed appear alongside the newly inserted ones.

hive> select * from hbase_mytable;                                           
OK
2.5 1.3 NULL
a a b
a1 a1 b1
a2 a2 b2
a3 a3 b3
second 3 NULL
third NULL 3.14159
Time taken: 1.315 seconds, Fetched: 7 row(s)
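
The NULLs belong to rows that already existed in HBase (2.5, second, third) and lack one of the mapped columns; rows such as first and second2, whose only columns (cf:message, cf:foo2) fall outside the mapping, do not appear at all. To list only fully populated rows, a predicate can be added (a sketch):

hive> select * from hbase_mytable where foo is not null and bar is not null;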

6. Verify in the hbase shell.

hbase(main):001:0> scan 'mytable'
ROW                        COLUMN+CELL                                                               
 2.5                       column=cf:foo, timestamp=1397112248576, value=1.3                         
 a                         column=cf:bar, timestamp=1397192214568, value=b                           
 a                         column=cf:foo, timestamp=1397192214568, value=a                           
 a1                        column=cf:bar, timestamp=1397192214568, value=b1                          
 a1                        column=cf:foo, timestamp=1397192214568, value=a1                          
 a2                        column=cf:bar, timestamp=1397192214568, value=b2                          
 a2                        column=cf:foo, timestamp=1397192214568, value=a2                          
 a3                        column=cf:bar, timestamp=1397192214568, value=b3                          
 a3                        column=cf:foo, timestamp=1397192214568, value=a3                          
 first                     column=cf:message, timestamp=1397109873612, value=hellp Hbase             
 second                    column=cf:foo, timestamp=1397112803662, value=3                           
 second2                   column=cf:foo2, timestamp=1397112883691, value=3                          
 third                     column=cf:bar, timestamp=1397109940598, value=3.14159                     
9 row(s) in 1.8090 seconds
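
To inspect a single row key instead of scanning the whole table, get can be used (a sketch):

hbase(main):002:0> get 'mytable', 'a1'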

 

 
