This site collects development and operations notes on Cloudera CDH/CDP, the Hadoop ecosystem, Semantic IoT, and more. Send questions to gooper@gooper.com.
Source: http://www.spikyjohn.com/cribsheets/20130609_hadoopinstall.html
Just the command lines to get Hadoop 2 installed on Ubuntu. These are all cribbed from the source notes below, and I am preserving them here for my own benefit so I can quickly repeat what I did. Note that many of these instructions are also in the main Hadoop docs from Apache.
Source material

Use Michael Noll's guide for version 1 & SSH, or this one for Hadoop 2.
Create the hadoop user and SSH
sudo apt-get install openssh-server openssh-client
sudo addgroup hadoop

If you cannot ssh to localhost without a passphrase, execute the following commands, then test your SSH.
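The key-setup commands themselves did not survive the copy; the following is a minimal sketch based on Michael Noll's guide, assuming an hduser account in the hadoop group created above (the adduser line is my assumption):

sudo adduser --ingroup hadoop hduser      # assumed: create the hduser account in the hadoop group
su - hduser                               # switch to the hadoop user
ssh-keygen -t rsa -P ""                   # generate an RSA key with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize the key for localhost logins
ssh localhost                             # should now log in without asking for a passphrase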
Get hadoop all set up
As the hduser, after downloading the tar:

tar -xvf hadoop-2.0.5-alpha.tar.gz
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
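HADOOP_PREFIX is never defined in what survived of this page. A plausible sketch of the full environment block, assuming the tarball was extracted to /home/hduser/hadoop-2.0.5-alpha (that path is my assumption), would be:

# assumed install location; adjust to wherever the tarball was extracted
export HADOOP_PREFIX=/home/hduser/hadoop-2.0.5-alpha
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export HADOOP_YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export PATH=${PATH}:${HADOOP_PREFIX}/bin:${HADOOP_PREFIX}/sbin

Putting these in ~/.bashrc is what makes the "login again so bash has the paths" step below work.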
Stolen entirely from JJ, but with paths changed for my Ubuntu
Stolen from http://jugnu-life.blogspot.com/2012/05/hadoop-20-install-tutorial-023x.html. Please click on his blog. Login again so bash has the paths above. In Hadoop 2.x, etc/hadoop under the install directory is the default conf directory. We need to modify / create the following property files in the etc/hadoop directory:

cd ~

Edit core-site.xml, then hdfs-site.xml, then mapred-site.xml with the <configuration> contents sketched below. In hdfs-site.xml and mapred-site.xml the paths should be specified as URIs, e.g. file:/home/hduser/workspace/hadoop_space/hadoop23/mapred/system.
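The original <configuration> blocks were stripped when this page was captured, leaving only empty skeletons. The sketch below is reconstructed from the linked jugnu-life tutorial and standard Hadoop 2.0.x property names; treat the exact values, especially the local paths, as assumptions to adapt. Note that in 2.0.5-alpha the aux-services value is mapreduce.shuffle (later releases renamed it mapreduce_shuffle).

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

hdfs-site.xml (paths as URIs):

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hduser/workspace/hadoop_space/hadoop23/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hduser/workspace/hadoop_space/hadoop23/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

mapred-site.xml (path as URI):

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>file:/home/hduser/workspace/hadoop_space/hadoop23/mapred/system</value>
  </property>
</configuration>

yarn-site.xml (used in the next step):

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>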
Edit yarn-site.xml with the contents sketched above. Format the namenode:

# hdfs namenode -format

Say Yes and let it complete the format. Time to start the daemons:

# hadoop-daemon.sh start namenode
# hadoop-daemon.sh start datanode

You can also start both of them together by:

# start-dfs.sh

Start the YARN daemons:

# yarn-daemon.sh start resourcemanager
# yarn-daemon.sh start nodemanager

You can also start all YARN daemons together by:

# start-yarn.sh

Time to check if the daemons have started. Enter the command:

# jps

Time to launch the UI. Open localhost:8088 to see the Resource Manager page. Done :) Happy Hadooping :)
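If everything came up, jps should list the four daemons plus itself, along the lines of the following (the PIDs are placeholders):

# jps
4868 NameNode
4972 DataNode
5143 ResourceManager
5264 NodeManager
5317 Jps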