

*Source 1: https://www.cloudera.com/more/training/certification/cca-spark.html




*Source 2: http://www.hadoopexam.com/Cloudera_Certification/CCA175/CCA175_Cloudera_Hadoop_and_Spark_Developer_Tips_and_Tricks.pdf

1. Preparation: I went through all the CCA175 questions and practiced the code provided by
http://www.HadoopExam.com. Thanks for the questions and code content; the material was
excellent and helped me a lot. (I also went through the entire Spark Professional training
module.)
2. No. of Questions: You will generally get 10 questions in the real exam, covering Sqoop,
Hive, PySpark and Scala, plus avro-tools to extract a schema (all question types are covered
in the CCA175 Certification Simulator).
3. Code Snippets: Snippets will be provided for PySpark and Scala. You have to edit the
snippets to match the problem statement.
4. Real Exam Environment: A gateway node is accessible for executing the problems during
the exam. Keep in mind there is no on-screen timer during the exam; you have to keep
asking how much time is left. Each problem has three sections:
· Instructions
· Data Set
· Output Requirements
Go through all three sections carefully before you start developing the code.
Note: If you start coding right after reading only the Instructions part of the question,
you will realize later that exact details such as the table name and the HDFS directory
are also specified. Having to redo the code wastes time and may even cost you a question.
5. Editor: nano and gedit are not available, so if you have to edit any code snippet you must
use vi alone. Make yourself familiar with the vi editor if you are not already.
6. Fill in the blanks: You don't have to write the entire Python or Scala code for Apache
Spark; generally they ask you to fill in the blanks.
7. Flume: There are very few questions on Flume.
8. Difficulty Level: If you have enough knowledge, the exam feels quite easy. The questions
are logically simple and can be answered on the first attempt if you read each question
carefully (all three sections).
9. Common mistake in Sqoop: People use localhost in the connection string, which is wrong; you
have to use the full hostname given in the question instead of localhost (this avoids wasting
time). A hedged example follows below.
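For illustration, a minimal Sqoop import of this shape; the hostname, database, credentials
and paths below are hypothetical placeholders, not values from the exam:

    # use the full hostname given in the question, never localhost
    sqoop import \
      --connect jdbc:mysql://gateway-host.example.com:3306/retail_db \
      --username exam_user --password exam_pass \
      --table orders \
      --target-dir /user/exam/orders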
10. Hive: Have at least basic knowledge of Hive as well (a table-creation sketch follows below).
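As a sketch of the table creation and dynamic partitioning that tip 22 below mentions; all
table and column names here are made up for illustration:

    -- enable dynamic partitioning before inserting
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;

    CREATE TABLE orders_part (order_id INT, amount DOUBLE)
    PARTITIONED BY (order_month STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

    -- the last SELECT column feeds the partition column
    INSERT OVERWRITE TABLE orders_part PARTITION (order_month)
    SELECT order_id, amount, substr(order_date, 1, 7) FROM orders;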
11. Spark: Use basic transformations to get the desired output, for instance filtering on a
particular condition, sorting, ranking, etc. (a sketch follows below).
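A minimal PySpark sketch of that style of transformation; the input path and field layout
are assumptions for illustration:

    from pyspark import SparkContext

    sc = SparkContext(appName="filter-sort-example")

    # hypothetical tab-delimited input: order_id \t category \t amount
    orders = sc.textFile("/user/exam/orders").map(lambda line: line.split("\t"))

    # keep one category, then sort by amount descending
    result = (orders.filter(lambda f: f[1] == "ELECTRONICS")
                    .sortBy(lambda f: float(f[2]), ascending=False))
    result.map(lambda f: "\t".join(f)).saveAsTextFile("/user/exam/output")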
12. Avro-tools: Use avro-tools to extract the schema of an Avro file (very nicely covered in the
CCA175 HadoopExam.com Simulator). Example commands follow below.
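For example, one common pattern is to pull a single Avro part file down and print its schema;
the paths here are hypothetical:

    hdfs dfs -get /user/exam/orders_avro/part-m-00000.avro .
    avro-tools getschema part-m-00000.avro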
13. Big Mistake: Avoid accidentally deleting your data; good practice is necessary to avoid such
mistakes. (Once you delete data or drop a Hive table, you have to recreate it entirely from
scratch.) The same is stressed by www.HadoopExam.com in the video sessions provided at
http://cca175cloudera.training4exam.com/ (please go through the sample sessions).
14. Spark SQL: They will not ask questions based on Spark SQL; focus instead on aggregate,
reduce, and sort (see the sketch below).
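A hedged PySpark sketch of the aggregate/reduce/sort pattern; field positions and paths are
assumptions:

    from pyspark import SparkContext

    sc = SparkContext(appName="aggregate-example")

    # hypothetical (category, amount) pairs parsed from a tab-delimited file
    pairs = (sc.textFile("/user/exam/orders")
               .map(lambda line: line.split("\t"))
               .map(lambda f: (f[1], float(f[2]))))

    # total per category, sorted descending by the aggregated value
    totals = (pairs.reduceByKey(lambda a, b: a + b)
                   .sortBy(lambda kv: kv[1], ascending=False))
    totals.map(lambda kv: "%s\t%.2f" % kv).saveAsTextFile("/user/exam/totals")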
15. Time management: This is very important (that's the reason you need so much practice; use
the CCA175 simulator to practice all the questions at least a week or two before your real
exam).
16. Data sets in the real exam are quite large, so each execution can take 2 to 5 minutes.

17. Attempts: Try to attempt all the questions, at least 9 out of 10, so that you can score 70%.
18. File format: Most questions involve processing a tab-delimited file.
19. Python or Scala: You will get a preloaded Python or Scala file to work with, so you don't
get to choose whether to attempt a question in Scala or PySpark. (I went through all the
video sessions provided by www.HadoopExam.com.)
20. Connection Issue: If you get disconnected during the exam, contact the proctor immediately.
If he/she is not available, log back into examslocal.com and use their online help.
21. Shell scripts: Get comfortable working with shell scripts; a few commands worth knowing are
sketched below.
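A few HDFS shell commands worth having at your fingertips; the paths are examples only:

    hdfs dfs -ls /user/exam                        # inspect output directories
    hdfs dfs -cat /user/exam/output/part-* | head  # spot-check results
    hdfs dfs -rm -r /user/exam/output              # careful: deletes data (see tip 13)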
22. Question types as mentioned in the syllabus: Questions came from Sqoop (import and export),
Hive (table creation and dynamic partitioning), PySpark and Scala (joining, sorting and
filtering data), and avro-tools. Snippets of code are provided for PySpark and Scala; you
have to edit the snippets to match the problem statement and can run the script file (which
is a separate file from the snippet) to get the results. A join sketch follows below.
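For the joining pattern just mentioned, a minimal PySpark sketch; the input paths and field
layouts are assumptions for illustration:

    from pyspark import SparkContext

    sc = SparkContext(appName="join-example")

    # hypothetical inputs: orders (order_id \t customer_id), customers (customer_id \t name)
    orders = (sc.textFile("/user/exam/orders")
                .map(lambda line: line.split("\t"))
                .map(lambda f: (f[1], f[0])))        # keyed by customer_id
    customers = (sc.textFile("/user/exam/customers")
                   .map(lambda line: line.split("\t"))
                   .map(lambda f: (f[0], f[1])))     # keyed by customer_id

    # join yields (customer_id, (order_id, name))
    joined = orders.join(customers)
    (joined.map(lambda kv: "\t".join([kv[0], kv[1][0], kv[1][1]]))
           .saveAsTextFile("/user/exam/joined"))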
23. Overall the exam is easy, but it requires a lot of practice to finish on time and to produce
accurate solutions. Hence go through all the material below for CCA175 (it will not take more
than a month if you are new; if you already know Spark and Hadoop, 2-3 weeks are good
enough):
· CCA175: Hadoop and Spark Developer Certification practice questions
· Hadoop professional training
· Spark professional training

Wishing you all the best.
