
Free Test

Quiz

1/10
Scenario: Jack is the administrator of project prj1. The project involves a large volume of sensitive data, such as bank accounts and medical records, and Jack wants to protect the data properly. Which of the following statements is necessary?
Select the answer
1 correct answer
A.
set ProjectACL=true;
B.
add accountprovider ram;
C.
set ProjectProtection=true;
D.
use prj1;
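
For context, project-level data protection in MaxCompute is toggled with console `set` commands; a hypothetical session combining the statements from the options might look like this (sketched from MaxCompute's command-line conventions, not a definitive recipe):

```sql
-- Hypothetical odpscmd session (project name from the question)
use prj1;                    -- switch to the project first
set ProjectProtection=true;  -- prevent data from flowing out of the project
```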

Quiz

2/10
Where is the metadata (e.g., table schemas) stored in Hive?
Select the answer
1 correct answer
A.
Stored as metadata on the NameNode
B.
Stored along with the data in HDFS
C.
Stored in the RDBMS like MySQL
D.
Stored in ZooKeeper
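
As background, Hive keeps its metastore in a relational database; a typical hive-site.xml fragment pointing the metastore at MySQL looks roughly like this (host, port, and database name are placeholders):

```xml
<configuration>
  <!-- JDBC connection for the Hive metastore (placeholder host and db) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://metastore-host:3306/hive_metastore</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
</configuration>
```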

Quiz

3/10
MaxCompute tasks are either computational or non-computational. A computational task performs actual operations on the data stored in a table: MaxCompute parses the task to obtain its execution plan and then submits it for execution. A non-computational task only reads or modifies metadata information; it is therefore not parsed, no execution plan is produced, and the task is submitted for execution directly, which gives it a faster response than a computational task. Which of the following operations on the table t_test is a computational task?
Select the answer
1 correct answer
A.
desc t_test
B.
alter table t_test add columns (comments string);
C.
select count(*) from t_test;
D.
truncate table t_test;
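
To illustrate the distinction the question draws, the statements below (reusing the question's table name) only touch metadata, while a query that scans rows must be parsed into an execution plan first:

```sql
-- Metadata-only operations: no execution plan is generated
desc t_test;
alter table t_test add columns (comments string);

-- Operation that reads the table's data: parsed and planned before execution
select count(*) from t_test;
```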

Quiz

4/10
We use the MaxCompute tunnel command to upload the log.txt file to the t_log table, where t_log is a partitioned table with partition columns (p1 string, p2 string). Which of the following commands is correct?
Select the answer
1 correct answer
A.
tunnel upload log.txt t_log/p1="b1", p2="b2"
B.
tunnel upload log.txt t_log/(p1="b1", p2="b2")
C.
tunnel upload log.txt t_log/p1="b1"/p2="b2"
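
For reference, MaxCompute's documented partition-spec form lists each partition column as a key="value" pair separated by commas; a sketch of an upload session (file name and values taken from the question):

```sql
-- Sketch of a tunnel upload into a partitioned table;
-- the partition spec follows the table name after a slash
tunnel upload log.txt t_log/p1="b1",p2="b2";
```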

Quiz

5/10
A log table named log in MaxCompute is a partitioned table, and the partition key is dt. A new partition is created daily to store that day's data. We now have one month of data, from dt='20180101' to dt='20180131', and we may use ______ to delete the data of 20180101.
Select the answer
1 correct answer
A.
delete from log where dt='20180101'
B.
truncate table where dt='20180101'
C.
drop partition log (dt='20180101')
D.
alter table log drop partition(dt='20180101')
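
Partition lifecycle operations in MaxCompute are DDL statements on the table; a hedged sketch of adding and dropping a daily partition (partition values illustrative):

```sql
-- Add a daily partition, then remove an old one
alter table log add if not exists partition (dt='20180201');
alter table log drop if exists partition (dt='20180101');
```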

Quiz

6/10
DataV is a powerful yet accessible data visualization tool featuring geographic information systems that allow rapid interpretation of data to understand relationships, patterns, and trends. When a DataV screen is ready, the work can be embedded into the enterprise's existing portal through ______.
Select the answer
1 correct answer
A.
URL after the release
B.
URL in the preview
C.
MD5 code obtained after the release
D.
Jar package imported after the release

Quiz

7/10
By integrating live dashboards, DataV can present and monitor business data simultaneously. This data-driven approach enables well-organized data mining and analysis, allowing the user to seize opportunities that might otherwise remain hidden. DataV supports a wide range of databases and data formats. Which of the following options does DataV not support?
Select the answer
1 correct answer
A.
Alibaba Cloud's AnalyticDB, ApsaraDB
B.
Static data in CSV and JSON formats
C.
Oracle Database
D.
MaxCompute Project

Quiz

8/10
You want to understand more about how users browse your public website; for example, which pages they visit prior to placing an order. You have a server farm of 100 web servers hosting your website. Which is the most efficient way to gather the logs from these web servers into your traditional Hadoop ecosystem?
Select the answer
1 correct answer
A.
Just copy them into HDFS using curl
B.
Ingest the server web logs into HDFS using Apache Flume
C.
Channel these clickstreams into Hadoop using Hadoop Streaming
D.
Import all user clicks from your OLTP databases into Hadoop using Sqoop
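
As an illustration of log collection, a minimal Flume agent that tails a web-server access log and writes it to HDFS could be configured roughly like this (agent name, log path, and NameNode host are hypothetical):

```properties
# Hypothetical Flume agent: tail access logs into HDFS
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/httpd/access_log
a1.sources.r1.channels = c1
a1.channels.c1.type = memory
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/logs/web/
a1.sinks.k1.channel = c1
```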

Quiz

9/10
Your company stores user profile records in an OLTP database. You want to join these records with web server logs you have already ingested into the Hadoop file system. What is the best way to obtain and ingest these user records?
Select the answer
1 correct answer
A.
Ingest with Hadoop streaming
B.
Ingest using Hive
C.
Ingest with sqoop import
D.
Ingest with Pig's LOAD command
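
A typical Sqoop import of a user table from an OLTP database into HDFS might look like this (connection string, table name, and target directory are placeholders):

```shell
# Hypothetical sqoop import of user profiles into HDFS
sqoop import \
  --connect jdbc:mysql://oltp-host:3306/appdb \
  --username etl_user -P \
  --table user_profiles \
  --target-dir /data/user_profiles \
  --num-mappers 4
```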

Quiz

10/10
You are working on a project where you need to chain together MapReduce and Hive jobs. You also need the ability to use forks, decision points, and path joins. Which ecosystem project should you use to perform these actions?
Select the answer
1 correct answer
A.
Apache HUE
B.
Apache Zookeeper
C.
Apache Oozie
D.
Apache Spark
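
Chaining jobs with forks and joins is expressed in an Oozie workflow definition; a skeletal workflow.xml running a MapReduce action and a Hive action in parallel might look roughly like this (names and paths are hypothetical, and per-action job configuration is elided):

```xml
<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.5">
  <start to="split"/>
  <fork name="split">
    <path start="mr-step"/>
    <path start="hive-step"/>
  </fork>
  <action name="mr-step">
    <map-reduce><!-- job-tracker, name-node, and job config omitted --></map-reduce>
    <ok to="merge"/>
    <error to="fail"/>
  </action>
  <action name="hive-step">
    <hive xmlns="uri:oozie:hive-action:0.2"><!-- script config omitted --></hive>
    <ok to="merge"/>
    <error to="fail"/>
  </action>
  <join name="merge" to="end"/>
  <kill name="fail"><message>Workflow failed</message></kill>
  <end name="end"/>
</workflow-app>
```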

Alibaba-ACA-BigData1 Practice test unlocks all online simulator questions

Thank you for choosing the free version of the Alibaba-ACA-BigData1 practice test! To further deepen your knowledge of the Alibaba Simulator, unlock the full version of our Alibaba-ACA-BigData1 Simulator: you will be able to take tests with over 77 constantly updated questions and easily pass your exam. 98% of people pass the exam on the first attempt after preparing with our 77 questions.

BUY NOW

What to expect from our Alibaba-ACA-BigData1 practice tests and how to prepare for any exam?

The Alibaba-ACA-BigData1 Simulator Practice Tests are part of the Alibaba Database and are the best way to prepare for any Alibaba-ACA-BigData1 exam. The Alibaba-ACA-BigData1 practice tests consist of 77 questions and are written by experts to prepare you to pass the exam on the first attempt. The Alibaba-ACA-BigData1 database includes questions from previous and other exams, which means you will be able to practice with simulated past and future questions. Preparation with the Alibaba-ACA-BigData1 Simulator will also give you an idea of the time it will take to complete each section of the Alibaba-ACA-BigData1 practice test. It is important to note that the Alibaba-ACA-BigData1 Simulator does not replace the classic Alibaba-ACA-BigData1 study guides; however, it provides valuable insight into what to expect and how much work needs to be done to prepare for the Alibaba-ACA-BigData1 exam.


The Alibaba-ACA-BigData1 practice test therefore represents an excellent tool to prepare for the actual exam, together with our Alibaba practice tests. Our Alibaba-ACA-BigData1 Simulator will help you assess your level of preparation and understand your strengths and weaknesses. Below you can see the details of the quizzes you will find in our Alibaba-ACA-BigData1 Simulator and how our unique Alibaba-ACA-BigData1 Database is made up of real questions:

Info quiz:

  • Quiz name: Alibaba-ACA-BigData1
  • Total number of questions: 77
  • Number of questions for the test: 50
  • Pass score: 80%

You can prepare for the Alibaba-ACA-BigData1 exams with our mobile app. It is very easy to use and even works offline in case of network failure, with all the functions you need to study and practice with our Alibaba-ACA-BigData1 Simulator.

Use our Mobile App, available for both Android and iOS devices, with our Alibaba-ACA-BigData1 Simulator. You can use it anywhere, and remember that our mobile app is free and available on all stores.

Our Mobile App contains all the Alibaba-ACA-BigData1 practice tests, which consist of 77 questions, and also provides study material to pass the final Alibaba-ACA-BigData1 exam with guaranteed success. Our Alibaba-ACA-BigData1 database contains hundreds of questions and Alibaba tests related to the Alibaba-ACA-BigData1 exam. This way you can practice anywhere you want, even offline without the internet.
