

[Misc] Installing Hadoop 2.0.5 on Ubuntu

Site admin | 2013.12.16 22:09 | Views: 1979

Source: http://www.spikyjohn.com/cribsheets/20130609_hadoopinstall.html

 

Just the command lines to get Hadoop 2 installed on Ubuntu. These are all cribbed from the source notes below, and I am preserving them here for my own benefit so I can quickly repeat what I did. Note that many of these instructions are also in the main Hadoop docs from Apache.

Source material

Use Michael Noll's guide for version 1 and SSH
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html

Or this one for Hadoop 2
http://jugnu-life.blogspot.com/2012/05/hadoop-20-install-tutorial-023x.html
http://hadoop.apache.org/docs/r2.0.5-alpha/

Create the hadoop user and set up SSH

sudo apt-get install openssh-server openssh-client

sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
su - hduser

If you cannot ssh to localhost without a passphrase, execute the following commands:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Testing your SSH
ssh localhost
Say yes when prompted to accept the host key
#exit
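
If ssh still prompts for a password after this, sshd is usually rejecting the key because of file permissions. A minimal fix, assuming the default Ubuntu sshd configuration:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys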

Get hadoop all set up

As the hduser, after downloading the tarball

tar -xvf hadoop-2.0.5-alpha.tar.gz
ln -s hadoop-2.0.5-alpha hadoop
# edit .bashrc and add the following lines
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_21/
export HADOOP_PREFIX="/home/hduser/hadoop"
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin

export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
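
To sanity-check the paths (my own addition, not from the original notes), log in again as hduser and ask Hadoop for its version:

source ~/.bashrc
hadoop version

This should report 2.0.5-alpha; a "command not found" means the PATH exports above did not take.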

Taken entirely from JJ's tutorial at http://jugnu-life.blogspot.com/2012/05/hadoop-20-install-tutorial-023x.html (please click through to his blog), with the paths changed for my Ubuntu setup.

Log in again so bash picks up the paths above. In Hadoop 2.x, etc/hadoop under the install directory is the default conf directory (here /home/hduser/hadoop/etc/hadoop). We need to modify or create the following property files in that directory.

cd ~
mkdir -p /home/hduser/workspace/hadoop_space/hadoop23/dfs/name
mkdir -p /home/hduser/workspace/hadoop_space/hadoop23/dfs/data
mkdir -p /home/hduser/workspace/hadoop_space/hadoop23/mapred/system
mkdir -p /home/hduser/workspace/hadoop_space/hadoop23/mapred/local

Edit core-site.xml with the following contents:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:8020</value>
<description>The name of the default file system. Either the literal string "local" or a host:port for NDFS.</description>
<final>true</final>
</property>
</configuration>
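
A side note: fs.default.name is the legacy 1.x name and is deprecated in Hadoop 2.x in favor of fs.defaultFS. Both work in 2.0.5-alpha, but if you want to silence the deprecation warning, the equivalent entry is:

<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:8020</value>
</property>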

Edit hdfs-site.xml with the following contents:

<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hduser/workspace/hadoop_space/hadoop23/dfs/name</value>
<description>Determines where on the local filesystem the DFS name node
should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the
directories, for redundancy. </description>
<final>true</final>
</property>

<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hduser/workspace/hadoop_space/hadoop23/dfs/data</value>
<description>Determines where on the local filesystem a DFS data node
should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named
directories, typically on different devices. Directories that do not exist are ignored.
</description>
<final>true</final>
</property>

<property>
<name>dfs.replication</name>
<value>1</value>
</property>

<property>
<name>dfs.permissions</name>
<value>false</value>
</property>

</configuration>
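
Note that dfs.permissions is also a legacy 1.x name; the 2.x equivalent is dfs.permissions.enabled. Turning permissions off is convenient for a single-node sandbox but should not be carried over to a real cluster:

<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>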

The paths
file:/home/hduser/workspace/hadoop_space/hadoop23/dfs/name and
file:/home/hduser/workspace/hadoop_space/hadoop23/dfs/data
are local folders that provide the space for the name table (fsimage and edits) and for the data blocks. Note that each path is specified as a URI.
Create a file mapred-site.xml inside the conf directory with the following contents:

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

<property>
<name>mapred.system.dir</name>
<value>file:/home/hduser/workspace/hadoop_space/hadoop23/mapred/system</value>
<final>true</final>
</property>

<property>
<name>mapred.local.dir</name>
<value>file:/home/hduser/workspace/hadoop_space/hadoop23/mapred/local</value>
<final>true</final>
</property>

</configuration>

The paths
file:/home/hduser/workspace/hadoop_space/hadoop23/mapred/system and
file:/home/hduser/workspace/hadoop_space/hadoop23/mapred/local
are local folders that provide the space for MapReduce system and intermediate data. Note that each path is specified as a URI.

Edit yarn-site.xml with the following contents:

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
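
One caveat if you later move to a newer release: from Hadoop 2.2 onward the aux-service name must be mapreduce_shuffle (underscore, since service names may no longer contain dots), and the class property is renamed to match:

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

For 2.0.5-alpha the dotted value above is the correct one.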

Format the namenode

# hdfs namenode -format

Say Yes and let it complete the format
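
To confirm the format succeeded (my own check), look inside the name directory configured in hdfs-site.xml; a fresh format creates a current/ subdirectory holding a VERSION file and an initial fsimage:

ls /home/hduser/workspace/hadoop_space/hadoop23/dfs/name/current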

Time to start the daemons

# hadoop-daemon.sh start namenode
# hadoop-daemon.sh start datanode

You can also start both of them together with

# start-dfs.sh

Start the YARN daemons

# yarn-daemon.sh start resourcemanager
# yarn-daemon.sh start nodemanager

You can also start all the YARN daemons together with

# start-yarn.sh

Time to check whether the daemons have started

Enter the command

# jps
2539 NameNode
2744 NodeManager
3075 Jps
3030 DataNode
2691 ResourceManager
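
If any daemon is missing from the jps output, check the logs first. The file names below assume the default scheme, <daemon-type>-<user>-<daemon>-<hostname>.log, under the install's logs directory:

tail -n 50 $HADOOP_PREFIX/logs/hadoop-hduser-namenode-*.log
tail -n 50 $HADOOP_PREFIX/logs/yarn-hduser-resourcemanager-*.log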

Time to launch the UI

Open localhost:8088 in a browser to see the ResourceManager page.
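
The NameNode web UI should likewise be reachable at localhost:50070 (the Hadoop 2.x default). As a final smoke test (my addition, assuming the stock tarball layout for the examples jar), run one of the bundled MapReduce examples:

hadoop jar ~/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.5-alpha.jar pi 2 5

If a pi estimate prints at the end, HDFS, YARN, and MapReduce are all wired together.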

Done :)

Happy Hadooping :)
