Data Integration with Microsoft Azure Data Factory Week 7 | Test prep Quiz Answers
In this article I am going to share the Coursera Course: Data Integration with Microsoft Azure Data Factory Week 7 | Test prep Quiz Answers with you.
Enrol Link: Week 6 | Test prep Quiz Answers
You want to add a data flow activity to a pipeline. Which of the following options would you choose?
- Move and Transform
- Azure Data Explorer
- Batch Service
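For context, when you drag a Data Flow activity from the "Move and Transform" category onto the pipeline canvas, Azure Data Factory records it in the pipeline definition as an activity of type `ExecuteDataFlow`. A minimal sketch of what that pipeline JSON looks like (the names `MyPipeline` and `MyMappingDataFlow` are placeholders, not from the quiz):

```json
{
  "name": "MyPipeline",
  "properties": {
    "activities": [
      {
        "name": "RunMappingDataFlow",
        "type": "ExecuteDataFlow",
        "typeProperties": {
          "dataFlow": {
            "referenceName": "MyMappingDataFlow",
            "type": "DataFlowReference"
          }
        }
      }
    ]
  }
}
```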
In Azure Data Factory, when you create multiple pipelines it is important to keep track of them. The Monitor tab in Azure Data Factory stores all run information by default. For how many days is the run information stored?
- 10 Days
- 30 Days
- 45 Days
- 365 Days
Which transformation can be used to load data into a data store or compute resource?
When a connector in Azure Data Factory is not supported as a source in a mapping data flow task, what must be done to transform data from that source?
- Use an aggregate transformation in the Dataflow activity.
- Ingest the data into a supported source using the copy activity.
- Use a group by activity in the Dataflow activity.
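The staging pattern behind that answer can be sketched as a Copy activity that lands the data in a store the mapping data flow does support, such as Azure Blob Storage. This is an illustrative fragment only; the dataset names and the `ODataSource`/`DelimitedTextSink` pairing are assumptions standing in for whichever unsupported source you actually have:

```json
{
  "name": "StageUnsupportedSource",
  "type": "Copy",
  "inputs": [
    { "referenceName": "UnsupportedSourceDataset", "type": "DatasetReference" }
  ],
  "outputs": [
    { "referenceName": "StagedBlobDataset", "type": "DatasetReference" }
  ],
  "typeProperties": {
    "source": { "type": "ODataSource" },
    "sink": { "type": "DelimitedTextSink" }
  }
}
```

A downstream Data Flow activity can then read from `StagedBlobDataset` and apply its transformations.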
In a Mapping Data Flow, after you finish transforming your data, you write it into a destination store by using the sink transformation. When using sink transformations, which of the following statements is true?
- Every data flow requires at least one sink transformation, and you can write to multiple sinks as necessary.
- Every data flow requires at least one sink transformation, but cannot write to multiple sinks.
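In the data flow script that Azure Data Factory generates behind the designer, a sink appears as the terminal step of a stream, and additional sinks can be attached to other streams in the same flow. A rough sketch, assuming a stream named `transformedStream` and sink names of my own choosing:

```
transformedStream sink(allowSchemaDrift: true,
    validateSchema: false) ~> sink1
```

Each extra destination is simply another `sink(...) ~> sinkN` step on a stream, which is what "you can write to multiple sinks as necessary" means in practice.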
As a data provider, you can share datasets with third parties or between departments within your organization. What feature should you enable to give your data consumers automatically refreshed data?
- Data Ingestion
In Azure Data Factory, after you have finished joining and transforming a dataset through a mapping data flow, it is important to write the newly created dataset into a destination store. Which of the following options would you use to do this?