Notes on development and operation of Cloudera CDH/CDP, the Hadoop ecosystem, and Semantic IoT. Questions: gooper@gooper.com.
1. Create the source Hive table as an EXTERNAL table (it reads from a file on HDFS and supplies the rows to be inserted into the HBase-backed table hbase_mytable below).
CREATE EXTERNAL TABLE IF NOT EXISTS external_file
(
FOO STRING,
BAR STRING
)
COMMENT 'TEST TABLE OF EMP_IP_TABLE'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE LOCATION '/data';
* Contents of the file under /data:
hadoop@bigdata-host:~/hive/conf$ hadoop fs -cat /data/external_file.txt
a,b
a1,b1
a2,b2
a3,b3
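If the sample file does not exist yet, it can be created locally and uploaded with standard HDFS commands (the local path /tmp/external_file.txt is illustrative):

```shell
# Create the sample CSV locally (local path is illustrative)
cat > /tmp/external_file.txt <<'EOF'
a,b
a1,b1
a2,b2
a3,b3
EOF

# Upload it into the directory the external table's LOCATION points at
hadoop fs -mkdir -p /data
hadoop fs -put /tmp/external_file.txt /data/
```

Because the table is EXTERNAL with LOCATION '/data', Hive reads every file in that directory; dropping the table later leaves the files in place.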
2. Create the Hive table that maps onto the HBase table.
CREATE EXTERNAL TABLE hbase_mytable(table_id string, foo string, bar string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:foo,cf:bar")
TBLPROPERTIES("hbase.table.name" = "mytable");
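With EXTERNAL and the HBaseStorageHandler, the underlying HBase table must already exist. If mytable has not been created yet, a minimal sketch in the hbase shell, using the table name and column family (cf) from the mapping above:

```
hbase(main):001:0> create 'mytable', 'cf'
```

In the mapping ":key,cf:foo,cf:bar", ":key" binds the first Hive column (table_id) to the HBase row key, and the remaining columns map to cf:foo and cf:bar. A non-external Hive table would instead create and own mytable itself.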
3. When starting Hive, include the required jars as shown below.
hadoop@bigdata-host:~/hive/bin$ hive --auxpath /home/hadoop/hive/lib/hbase-0.94.6.1.jar,/home/hadoop/hive/lib/zookeeper-3.4.3.jar,/home/hadoop/hive/lib/hive-hbase-handler-0.11.0.jar,/home/hadoop/hive/lib/guava-11.0.2.jar,/home/hadoop/hive/lib/hive-contrib-0.11.0.jar -hiveconf hbase.master=localhost:60000
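Instead of passing --auxpath on every launch, the same jars can be registered once through the HIVE_AUX_JARS_PATH environment variable (a sketch; the jar versions must match your installation, here the ones used in this post):

```shell
# e.g. in hive-env.sh or the current shell
export HIVE_AUX_JARS_PATH=/home/hadoop/hive/lib/hbase-0.94.6.1.jar,/home/hadoop/hive/lib/zookeeper-3.4.3.jar,/home/hadoop/hive/lib/hive-hbase-handler-0.11.0.jar,/home/hadoop/hive/lib/guava-11.0.2.jar,/home/hadoop/hive/lib/hive-contrib-0.11.0.jar

hive -hiveconf hbase.master=localhost:60000
```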
4. From the Hive CLI, insert the rows into the HBase table. Execution output:
hive> insert into table hbase_mytable select foo, foo, bar from external_file;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201404111158_0008, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201404111158_0008
Kill Command = /home/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201404111158_0008
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2014-04-11 13:56:49,322 Stage-0 map = 0%, reduce = 0%
2014-04-11 13:56:55,482 Stage-0 map = 100%, reduce = 0%, Cumulative CPU 1.74 sec
2014-04-11 13:56:56,500 Stage-0 map = 100%, reduce = 0%, Cumulative CPU 1.74 sec
2014-04-11 13:56:57,518 Stage-0 map = 100%, reduce = 0%, Cumulative CPU 1.74 sec
2014-04-11 13:56:58,541 Stage-0 map = 100%, reduce = 0%, Cumulative CPU 1.74 sec
2014-04-11 13:56:59,561 Stage-0 map = 100%, reduce = 0%, Cumulative CPU 1.74 sec
2014-04-11 13:57:00,587 Stage-0 map = 100%, reduce = 100%, Cumulative CPU 1.74 sec
MapReduce Total cumulative CPU time: 1 seconds 740 msec
Ended Job = job_201404111158_0008
4 Rows loaded to hbase_mytable
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 1.74 sec HDFS Read: 220 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 740 msec
OK
Time taken: 34.565 seconds
5. Check the values in hbase_mytable (rows that already existed in HBase appear alongside the newly inserted ones; pre-existing rows such as 2.5, second, and third show NULL for any mapped column their row does not contain).
hive> select * from hbase_mytable;
OK
2.5 1.3 NULL
a a b
a1 a1 b1
a2 a2 b2
a3 a3 b3
second 3 NULL
third NULL 3.14159
Time taken: 1.315 seconds, Fetched: 7 row(s)
6. Verify in the hbase shell.
hbase(main):001:0> scan 'mytable'
ROW COLUMN+CELL
2.5 column=cf:foo, timestamp=1397112248576, value=1.3
a column=cf:bar, timestamp=1397192214568, value=b
a column=cf:foo, timestamp=1397192214568, value=a
a1 column=cf:bar, timestamp=1397192214568, value=b1
a1 column=cf:foo, timestamp=1397192214568, value=a1
a2 column=cf:bar, timestamp=1397192214568, value=b2
a2 column=cf:foo, timestamp=1397192214568, value=a2
a3 column=cf:bar, timestamp=1397192214568, value=b3
a3 column=cf:foo, timestamp=1397192214568, value=a3
first column=cf:message, timestamp=1397109873612, value=hellp Hbase
second column=cf:foo, timestamp=1397112803662, value=3
second2 column=cf:foo2, timestamp=1397112883691, value=3
third column=cf:bar, timestamp=1397109940598, value=3.14159
9 row(s) in 1.8090 seconds
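Instead of scanning the whole table, a single row can be fetched by key; for example, the row inserted from Hive with key a1:

```
hbase(main):002:0> get 'mytable', 'a1'
```

This returns the cf:foo and cf:bar cells for that row key (a1 and b1 in this example). Note that the row second2 with column cf:foo2 never appears on the Hive side, since only cf:foo and cf:bar are listed in hbase.columns.mapping.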