

-----The notes below record how to run each step by calling java or spark-submit directly, without using the *.py program that S2RDF provides (it was written for testing purposes only and cannot be used for real work).
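For orientation, the three tools chain together as sketched below (a condensed summary of the commands recorded in the rest of this note, not a tested script; the paths and jar names are the ones used here):

#!/usr/bin/env bash
# Condensed S2RDF pipeline as recorded in this note (sketch only).
set -e

# 1) DataSetCreator: partition test2.nq (under s2rdf/ on HDFS) and emit statistics.
$HOME/spark/bin/spark-submit --driver-memory 1g --class runDriver --master yarn \
  --executor-memory 1g --deploy-mode cluster \
  ./datasetcreator_2.10-1.1.jar s2rdf/ test2.nq VP 0.2    # repeat for SO, OS, SS

# 2) QueryTranslator: translate a SPARQL query to Spark SQL using those statistics.
java -jar queryTranslator-1.1.jar -i data/sparql.in -o data/sparql.in \
  -sd data/statistics/ -sUB 0.2

# 3) QueryExecutor: run the generated SQL against the partitioned tables.
$HOME/spark/bin/spark-submit --driver-memory 2g --class runDriver --master yarn \
  --executor-memory 1g --deploy-mode cluster \
  --files data/sparql.in.sql ./queryexecutor_2.10-1.1.jar s2rdf sparql.in.sql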

--------------------DataSetCreator (the data to be registered (test2.nq) must exist under the s2rdf folder on HDFS; run from /home/hadoop/DataSetCreator)-------------------------------------
1. Generate Vertical Partitioning
$HOME/spark/bin/spark-submit --driver-memory 1g --class runDriver --master yarn  --executor-memory 1g --deploy-mode cluster ./datasetcreator_2.10-1.1.jar s2rdf/ test2.nq VP 0.2
==> /tmp/stat_vp.txt is created

2. Generate Extended Vertical Partitioning subset SO
$HOME/spark/bin/spark-submit --driver-memory 1g --class runDriver --master yarn  --executor-memory 1g --deploy-mode cluster ./datasetcreator_2.10-1.1.jar s2rdf/ test2.nq SO 0.2 
==> /tmp/stat_so.txt is created

3. Generate Extended Vertical Partitioning subset OS
$HOME/spark/bin/spark-submit --driver-memory 1g --class runDriver --master yarn  --executor-memory 1g --deploy-mode cluster ./datasetcreator_2.10-1.1.jar s2rdf/ test2.nq OS 0.2
==> /tmp/stat_os.txt is created

4. Generate Extended Vertical Partitioning subset SS
$HOME/spark/bin/spark-submit --driver-memory 1g --class runDriver --master yarn  --executor-memory 1g --deploy-mode cluster ./datasetcreator_2.10-1.1.jar s2rdf/ test2.nq SS 0.2
==> /tmp/stat_ss.txt is created
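Since the four runs differ only in the partitioning-mode argument (VP, SO, OS, SS), they can be scripted in one loop. A sketch, assuming test2.nq still sits on the local disk and the standard hdfs CLI is on the PATH:

#!/usr/bin/env bash
# Upload the input data and run DataSetCreator once per partitioning mode (sketch).
set -e

hdfs dfs -mkdir -p s2rdf            # HDFS folder DataSetCreator reads from
hdfs dfs -put -f test2.nq s2rdf/    # the data to be registered

for MODE in VP SO OS SS; do         # VP plus the three ExtVP subsets
  $HOME/spark/bin/spark-submit --driver-memory 1g --class runDriver --master yarn \
    --executor-memory 1g --deploy-mode cluster \
    ./datasetcreator_2.10-1.1.jar s2rdf/ test2.nq "$MODE" 0.2
done
# Each run leaves its statistics in /tmp/stat_<mode>.txt (stat_vp.txt, stat_so.txt, ...).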




----------------------QueryTranslator (the files under data/, created by the DataSetCreator step, all exist as local OS files rather than on HDFS; run from /home/hadoop/QueryTranslator/S2RDF_QueryTranslator)------------------------
java -jar /home/hadoop/QueryTranslator/S2RDF_QueryTranslator/queryTranslator-1.1.jar -i data/sparql.in -o data/sparql.in -sd data/statistics/ -sUB 0.2
===>
VP STAT Size = 86
OS STAT Size = 353
SO STAT Size = 353
SS STAT Size = 1702
THE NUMBER OF ALL SAVED (< ScaleUB) TRIPLES IS -> 1311014421
THE NUMBER OF ALL SAVED (< ScaleUB) TABLES IS -> 2127
TABLE-><gr__offers>
TABLE-><foaf__homepage>
TABLE-><sorg__author>
TABLE-><wsdbm__friendOf>
TABLE-><wsdbm__likes>
TABLE-><sorg__language>
TABLE-><rev__hasReview>
TABLE-><rev__reviewer>
TABLE-><wsdbm__follows>
TABLE-><gr__includes>
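The -i argument (data/sparql.in here) is a plain SPARQL query file. The table names printed above are WatDiv-style predicates, so an input along the following lines would exercise them. This is an illustrative query only, not the actual contents of sparql.in, and the wsdbm prefix IRI is assumed to be the usual WatDiv one:

# Illustrative only: write a WatDiv-style SPARQL query as the translator input.
cat > data/sparql.in <<'EOF'
PREFIX wsdbm: <http://db.uwaterloo.ca/~galuc/wsdbm/>
SELECT ?u ?w WHERE {
  ?u wsdbm:friendOf ?v .
  ?v wsdbm:likes ?w .
}
EOF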


* Folder structure at the location where QueryTranslator is run (even though everything below is present here, only the data folder and the queryTranslator-1.1.jar file are actually used.)
-bash-4.1$ ll
total 20192
-rw-rw-r--. 1 hadoop hadoop        0 2016-06-13 15:30 HiveSPARQL_error.log
drwxrwxr-x. 3 hadoop hadoop     4096 2016-06-13 15:36 data
drwxrwxr-x. 2 hadoop hadoop     4096 2016-05-26 18:46 lib
-rw-rw-r--. 1 hadoop hadoop 20661741 2016-04-04 22:34 queryTranslator-1.1.jar
drwxrwxr-x. 3 hadoop hadoop     4096 2016-05-26 18:46 src
-bash-4.1$ ll -R data
data:
total 16
-rw-rw-r--. 1 hadoop hadoop    0 2016-06-13 15:28 HiveSPARQL_error.log
-rw-rw-r--. 1 hadoop hadoop  730 2015-08-17 17:07 sparql.in
-rw-rw-r--. 1 hadoop hadoop 1821 2016-06-13 15:36 sparql.in.log
-rw-rw-r--. 1 hadoop hadoop 1889 2016-06-13 15:36 sparql.in.sql
drwxrwxr-x. 2 hadoop hadoop 4096 2016-05-26 18:46 statistics

data/statistics:
total 132
-rw-rw-r--. 1 hadoop hadoop 19129 2015-08-17 17:07 stat_os.txt
-rw-rw-r--. 1 hadoop hadoop 18910 2015-08-17 17:07 stat_so.txt
-rw-rw-r--. 1 hadoop hadoop 89774 2015-08-17 17:07 stat_ss.txt
-rw-rw-r--. 1 hadoop hadoop  3419 2015-08-17 17:07 stat_vp.txt


===== New QueryTranslator =====>
First prepare the statistics directory the translator expects (empty files are enough to satisfy it, as this run shows):

-bash-4.1$ mkdir ./test2/statistics
-bash-4.1$ touch ./test2/statistics/stat_vp.txt
-bash-4.1$ touch ./test2/statistics/stat_os.txt
-bash-4.1$ touch ./test2/statistics/stat_so.txt
-bash-4.1$ touch ./test2/statistics/stat_ss.txt

java -jar /home/hadoop/QueryTranslator/S2RDF_QueryTranslator/queryTranslator-1.1.jar -i ./test2/test2.sparql -o ./test2/test2.sparql -sd ./test2/statistics/ -sUB 0.2
==> an SQL file named test2.sparql.sql is created under the ./test2/ folder
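The empty stat files above merely satisfy the translator; presumably the real statistics produced by the DataSetCreator runs give it more to work with. A sketch that reuses them instead, assuming the /tmp/stat_*.txt files from steps 1-4 are reachable on this machine (fetch them with hdfs dfs -get first if they were written to HDFS):

# Reuse the DataSetCreator statistics instead of empty placeholder files (sketch).
mkdir -p ./test2/statistics
for S in vp so os ss; do
  cp /tmp/stat_${S}.txt ./test2/statistics/   # assumes the files are local
done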




------------------------QueryExecutor (run from /home/hadoop/QueryExecutor)--------------------------------------
$HOME/spark/bin/spark-submit --driver-memory 2g --class runDriver --master yarn  --executor-memory 1g --deploy-mode cluster --files ./IL5-1-U-1--SO-OS-SS-VP__WatDiv1M.sql ./queryexecutor_2.10-1.1.jar WatDiv1M IL5-1-U-1--SO-OS-SS-VP__WatDiv1M.sql > ./QueryExecutor.err


$HOME/spark/bin/spark-submit --driver-memory 2g --class runDriver --master yarn  --executor-memory 1g --deploy-mode cluster --files /home/hadoop/QueryTranslator/S2RDF_QueryTranslator/data/sparql.in.sql ./queryexecutor_2.10-1.1.jar s2rdf sparql.in.sql


$HOME/spark/bin/spark-submit --driver-memory 2g --class runDriver --master yarn  --executor-memory 1g --deploy-mode cluster --files ./sparql.in__s2rdf.sql ./queryexecutor_2.10-1.1.jar s2rdf sparql.in__s2rdf.sql
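All three invocations follow the same pattern: --files ships the translated SQL file into the YARN containers, and the program arguments are the database name and the bare SQL file name. As a parameterized sketch (the script itself and its parameter handling are mine, not part of S2RDF):

#!/usr/bin/env bash
# Generic QueryExecutor invocation (sketch): database and SQL file as parameters.
DB=${1:?usage: $0 <database> <sql-file>}    # e.g. s2rdf or WatDiv1M
SQL=${2:?usage: $0 <database> <sql-file>}   # translated .sql from QueryTranslator

$HOME/spark/bin/spark-submit --driver-memory 2g --class runDriver --master yarn \
  --executor-memory 1g --deploy-mode cluster \
  --files "$SQL" ./queryexecutor_2.10-1.1.jar "$DB" "$(basename "$SQL")"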


--------- New QueryExecutor ---------------------------------------------------------
$HOME/spark/bin/spark-submit --driver-memory 2g --class runDriver --master yarn  --executor-memory 1g --deploy-mode cluster --files /home/hadoop/QueryExecutor/test2/test2.sparql.sql ./queryexecutor_2.10-1.1.jar s2rdf test2.sparql.sql
==> the SELECT results are written to /tmp/<table name>/results.txt and <table name>/resultTimes.txt
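To inspect the output, something like the following should work. Whether these paths live on HDFS or on the local filesystem is an assumption to verify (the job runs with --deploy-mode cluster), as is the exact folder name:

# Fetch the per-table result files (sketch; paths assumed to be on HDFS).
TABLE="<table name>"                       # hypothetical: the result folder to inspect
hdfs dfs -cat /tmp/"$TABLE"/results.txt    # SELECT results
hdfs dfs -cat "$TABLE"/resultTimes.txt     # recorded execution times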