Machine Learning in the Enterprise Coursera Quiz Answers
Hello friends! In this article, I am going to share the Machine Learning in the Enterprise Coursera quiz answers with you.
Enrol Link: Machine Learning in the Enterprise
Understanding the ML Enterprise Workflow Quiz Answers
Question 1)
Which two activities are involved in ML development?
- Training formalization and training operationalization
- Version control and training operationalization
- Experimentation and version control
- Experimentation and training operationalization
Question 2)
Which process covers algorithm selection, model training, hyperparameter tuning, and model evaluation in the Experimentation and Prototyping activity?
- Data exploration
- Model prototyping
- Model validation
- Feature engineering
Question 3)
What is the correct process that data scientists use to develop models on an experimentation platform?
- Problem definition > Data exploration > Data selection > Feature engineering > Model prototyping > Model validation
- Problem definition > Data selection > Data exploration > Feature engineering > Model prototyping > Model validation
- Problem definition > Data selection > Data exploration > Model prototyping > Feature engineering > Model validation
- Problem definition > Data selection > Data exploration > Model prototyping > Model validation > Feature engineering
Question 4)
If the model needs to be repeatedly retrained in the future, an automated training pipeline is also developed. Which task do we use for this?
- Training formalization
- Training implementation
- Training operationalization
- Experimentation & prototyping
Data in the Enterprise Quiz Answers
Question 1)
Which of the following is correct for Online serving?
- Online serving is for low-latency data retrieval of small batches of data for real-time processing.
- Online serving is for high-latency data retrieval of small batches of data for real-time processing.
- Online serving is for high throughput and serving large volumes of data for offline processing.
- Online serving is for low throughput and serving large volumes of data for offline processing.
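For context, here is a minimal sketch of a low-latency online read from Vertex AI Feature Store using the Python SDK. The project, featurestore, entity type, and feature IDs below are hypothetical placeholders, not part of the course material.

```python
# Sketch: online serving = low-latency retrieval of a small batch of
# feature values for real-time prediction. All resource names below are
# hypothetical placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

entity_type = aiplatform.EntityType(
    entity_type_name="users", featurestore_id="my_featurestore"
)

# Read a handful of entities' latest feature values (returns a DataFrame).
df = entity_type.read(entity_ids=["user_123"], feature_ids=["age", "country"])
print(df)
```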
Question 2)
What do the aggregation values contain for any feature?
- The min, median, and std. dev values for each feature
- The min, zeros, and std. dev values for each feature
- The min, median, and max values for each feature
- The count, median, and max values for each feature
Question 3)
Which of the following is not a part of Google’s enterprise data management and governance tool?
- Feature Store
- Data Catalog
- Dataplex
- Analytics Catalog
Question 4)
Which of the following statements is not a feature of Analytics Hub?
- Analytics Hub efficiently and securely exchanges data analytics assets across organizations to address challenges of data reliability and cost.
- You can create and access a curated library of internal and external assets, including unique datasets like Google Trends, backed by the power of BigQuery.
- Analytics Hub requires batch data pipelines that extract data from databases, store it in flat files, and transmit them to the consumer where they are ingested into another database.
- There are three roles in Analytics Hub – A Data Publisher, Exchange Administrator, and a Data Subscriber.
Question 5)
Which data processing option can be used for transforming large unstructured data in Google Cloud?
- Hadoop proc
- Dataflow
- Beam proc
- Apache prep
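Dataflow executes Apache Beam pipelines, so a large-scale unstructured-data transform typically looks like the sketch below. The project, bucket paths, and transform logic are hypothetical.

```python
# Minimal Apache Beam pipeline; on Google Cloud it runs on Dataflow by
# selecting the DataflowRunner. Bucket and project names are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",          # use "DirectRunner" to test locally
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/raw/*.txt")
        | "Normalize" >> beam.Map(str.lower)   # a stand-in transform
        | "Write" >> beam.io.WriteToText("gs://my-bucket/clean/out")
    )
```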
Science of Machine Learning and Custom Training Quiz Answers
Question 1)
Model complexity often refers to the number of features or terms included in a given predictive model. What happens when the complexity of the model increases?
- Model is more likely to overfit.
- Model will not figure out general relationships in the data.
- Model performance on a test set is going to be poor.
- All of the options are correct.
Question 2)
The learning rate is a hyperparameter that controls how much to change the model in response to the estimated error each time the model weights are updated. Choosing the learning rate is challenging. What can happen if the value is too small?
- Training may take a long time.
- If the learning rate value is too small, then the model will diverge.
- The model will train more quickly.
- Smaller learning rates require fewer training epochs, given the smaller changes made to the weights each update.
Question 3)
The learning rate is a hyperparameter that controls how much to change the model in response to the estimated error each time the model weights are updated. Choosing the learning rate is challenging. What can happen if the value is too large?
- Training may take a long time.
- If the learning rate value is too large, then the model will converge.
- The model will not train.
- A large learning rate value may result in the model learning a sub-optimal set of weights too fast or an unstable training process.
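To see both failure modes concretely, here is a toy gradient-descent loop on f(w) = w², whose minimum is at w = 0:

```python
# Toy gradient descent on f(w) = w**2 (minimum at w = 0) to illustrate
# the effect of the learning rate.
def descend(lr, steps=20, w=5.0):
    for _ in range(steps):
        grad = 2 * w          # derivative of w**2
        w = w - lr * grad     # weight update
    return w

print(descend(lr=0.001))  # too small: barely moves toward 0 (slow training)
print(descend(lr=0.1))    # reasonable: converges close to 0
print(descend(lr=1.1))    # too large: |w| grows every step (divergence)
```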
Question 4)
Which of the following is true?
- Larger batch sizes require smaller learning rates.
- Smaller batch sizes require smaller learning rates.
- Larger batch sizes require larger learning rates.
- Smaller batch sizes require larger learning rates.
Question 5)
The learning rate is a configurable hyperparameter used in the training of neural networks that has a small positive value, often in the range between _______
- 1.0 and 3.0.
- 0.0 and 1.0.
- > 0.0 and < 1.00.
- < 0.0 and > 1.00.
Question 6)
What is “data parallelism” in distributed training?
- Run the same model & computation on every device, but train each of them using the same training samples.
- Run different models & computation on every device, but train each of them using only one training sample.
- Run the same model & computation on every device, but train each of them using different training samples.
- Run different models & computation on a single device, but train each of them using different training samples.
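A common way to get data parallelism in practice is tf.distribute.MirroredStrategy, which replicates the same model on every visible GPU and feeds each replica a different slice of each global batch. A minimal sketch:

```python
# Data parallelism sketch: the same model and computation run on every
# device, but each replica trains on different training samples.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():  # variables created here are mirrored across devices
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) then splits each global batch across the replicas
# automatically and aggregates the gradients.
```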
Vertex Vizier Hyperparameter Tuning Quiz Answers
Question 1)
Bayesian optimization takes into account past evaluations when choosing the hyperparameter set to evaluate next. By choosing its parameter combinations in an informed way, it enables itself to focus on those areas of the parameter space that it believes will bring the most promising validation scores. Therefore it _____________________.
- enables itself to focus on those areas of the parameter space that it believes will bring the most promising validation scores.
- requires fewer iterations to get to the optimal set of hyperparameter values.
- limits the number of times a model needs to be trained for validation.
- All of the options are correct.
Question 2)
Which of the following is a black-box optimization service?
- Manual Search
- Vertex Vizier
- AutoML
- Early stopping
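One way to use Vizier's Bayesian optimization from the Vertex AI Python SDK is a HyperparameterTuningJob. In the sketch below, the training script, container image, metric name, and bucket are hypothetical placeholders; the script is assumed to report the metric via hypertune.

```python
# Hedged sketch: Vizier-backed hyperparameter tuning via the Vertex AI
# SDK. Script, image, metric, and bucket names are hypothetical.
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(project="my-project", location="us-central1")

custom_job = aiplatform.CustomJob.from_local_script(
    display_name="trainer",
    script_path="task.py",  # assumed to report the "accuracy" metric
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-11:latest",
    staging_bucket="gs://my-bucket/staging",
)

tuning_job = aiplatform.HyperparameterTuningJob(
    display_name="vizier-tuning",
    custom_job=custom_job,
    metric_spec={"accuracy": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
        "batch_size": hpt.DiscreteParameterSpec(values=[32, 64, 128], scale=None),
    },
    max_trial_count=20,       # Vizier chooses each trial informed by past ones
    parallel_trial_count=3,
)
tuning_job.run()
```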
Question 3)
Which of the following algorithms is useful if you want to specify a number of trials greater than the number of points in the feasible space?
- Grid Search
- Bayesian Optimization
- Random Search
- Manual Search
Question 4)
Black box optimization algorithms find the best operating parameters for any system whose ______________?
- iterations to get to the optimal set of hyperparameter values are less.
- execution time is less.
- performance can be measured as a function of adjustable parameters.
- number of iterations is limited to train a model for validation.
Question 5)
Which of the following can make a huge difference in model quality?
- Increasing the learning rate.
- Setting hyperparameters to their optimal values for a given dataset.
- Decreasing the number of epochs.
- Increasing the training time.
Prediction and Model Monitoring Using Vertex AI Quiz Answers
Question 1)
Which statements are correct for serving predictions using Pre-built containers?
- Vertex AI provides Docker container images that you run as pre-built containers for serving predictions.
- Pre-built containers provide HTTP prediction servers that you can use to serve prediction using minimal configurations.
- Pre-built containers are organized by Machine learning framework and framework version.
- All of the options are correct.
Question 2)
Which statement is correct regarding the maximum size for a CSV file during batch prediction?
- The data source file must be no larger than 100 GB.
- Each data source file must not be larger than 10 GB. You can include multiple files, up to a maximum amount of 100 GB.
- The data source file must be no larger than 50 GB. You can not include multiple files.
- Each data source file must include multiple files, up to a maximum amount of 50 GB.
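As a sketch, a CSV batch prediction job submitted with the Vertex AI Python SDK might look like the following; the model resource name and bucket paths are hypothetical placeholders.

```python
# Hedged sketch: batch prediction from CSV files in Cloud Storage using
# the Vertex AI SDK. Model ID and bucket paths are hypothetical.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("projects/my-project/locations/us-central1/models/123")

batch_job = model.batch_predict(
    job_display_name="csv-batch-prediction",
    gcs_source=[
        # Each CSV must be no larger than 10 GB; all files together, 100 GB.
        "gs://my-bucket/input/part-1.csv",
        "gs://my-bucket/input/part-2.csv",
    ],
    gcs_destination_prefix="gs://my-bucket/predictions/",
    instances_format="csv",
    machine_type="n1-standard-4",
)
batch_job.wait()
```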
Question 3)
What should be done if the source table is in a different project?
- You should provide the BigQuery Data Editor role to the Vertex AI service account in that project.
- You should provide the BigQuery Data Viewer role to the Vertex AI service account in that project.
- You should provide the BigQuery Data Editor role to the Vertex AI service account in your project.
- You should provide the BigQuery Data Viewer role to the Vertex AI service account in your project.
Question 4)
Which of the following statements is invalid for a data source file in batch prediction?
- The first line of the data source CSV file must contain the name of the columns.
- If the Cloud Storage bucket is in a different project than where you use Vertex AI, you must provide the Storage Object Creator role to the Vertex AI service account in that project.
- BigQuery data source tables must be no larger than 100 GB.
- You must use a regional BigQuery dataset.
Question 5)
What are the features of Vertex AI model monitoring?
- Drift in data quality
- Skew in training vs. serving data
- Feature Attribution and UI visualizations
- All of the options are correct.
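A heavily hedged sketch of enabling skew and drift monitoring on a deployed endpoint follows; the parameter names track Google's published SDK samples but may vary by SDK version, and the endpoint, fields, thresholds, and email are hypothetical placeholders.

```python
# Hedged sketch: Vertex AI model monitoring for skew (training vs.
# serving) and drift (serving vs. recent serving). All names hypothetical.
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/456"
)

# Skew: compare serving data against the original training dataset.
skew_config = model_monitoring.SkewDetectionConfig(
    data_source="bq://my-project.my_dataset.training_table",
    target_field="label",
    skew_thresholds={"age": 0.3},
)
# Drift: compare recent serving data against earlier serving data.
drift_config = model_monitoring.DriftDetectionConfig(drift_thresholds={"age": 0.3})

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="monitoring-job",
    endpoint=endpoint,
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.8),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["me@example.com"]),
    objective_configs=model_monitoring.ObjectiveConfig(skew_config, drift_config),
)
```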
Question 6)
For which of the following is the baseline the statistical distribution of the feature’s values seen in production in the recent past?
- Categorical features
- Numerical features
- Drift detection
- Skew detection
Vertex AI Pipelines Quiz Answers
Question 1)
Which package is used to define and interact with pipelines and components?
- kfp.components
- kfp.dsl package
- kfp.compiler
- kfp.containers
Question 2)
How can you define the pipeline’s workflow as a graph?
- By using different inputs for each component.
- Use the previous pipeline’s output as an input for the current pipeline.
- By using the outputs of a component as an input of another component.
- By using predictive input for each component.
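As a sketch with the KFP v2 SDK: components are plain Python functions decorated through the dsl package, and passing one task's .output into the next component is what defines the workflow graph. The component names and bodies below are hypothetical.

```python
# Sketch: the kfp.v2 dsl package defines components and pipelines; the
# graph is formed by wiring one component's output into another's input.
from kfp.v2 import dsl

@dsl.component
def preprocess(raw_path: str) -> str:
    # ... clean the data, write it out, return the new path ...
    return raw_path + ".clean"

@dsl.component
def train(clean_path: str) -> str:
    # ... train a model on the cleaned data, return the model location ...
    return "gs://my-bucket/model"

@dsl.pipeline(name="demo-pipeline", pipeline_root="gs://my-bucket/pipeline-root")
def demo_pipeline(raw_path: str = "gs://my-bucket/raw.csv"):
    prep_task = preprocess(raw_path=raw_path)
    # Using prep_task.output as train's input creates the graph edge.
    train(clean_path=prep_task.output)
```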
Question 3)
What can you use to compile the pipeline?
- compiler.Compiler
- kfp.v2.compiler
- kfp.Compiler
- kfp.v2.compiler.Compiler
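Continuing the sketch above, the pipeline compiles to a job spec with kfp.v2.compiler.Compiler:

```python
# Compile the pipeline defined above into a job spec for Vertex AI Pipelines.
from kfp.v2 import compiler

compiler.Compiler().compile(
    pipeline_func=demo_pipeline,
    package_path="demo_pipeline.json",
)
```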
Question 4)
What can you use to create a pipeline run on Vertex AI Pipelines?
- Service account
- Pipeline root path
- kfp.v2.compiler.Compiler
- Vertex AI python client
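Finally, the compiled spec can be submitted as a pipeline run with the Vertex AI Python client; the project, bucket, and service account below are hypothetical placeholders.

```python
# Submit the compiled pipeline as a run on Vertex AI Pipelines.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.PipelineJob(
    display_name="demo-pipeline-run",
    template_path="demo_pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
)
job.run(service_account="pipelines-sa@my-project.iam.gserviceaccount.com")
```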