Build and Operate Machine Learning Solutions with Azure Coursera Answers

Hello friends! In this article, I am going to share the quiz answers for the Coursera course Build and Operate Machine Learning Solutions with Azure.

Enroll Link: Build and Operate Machine Learning Solutions with Azure



WEEK 1 QUIZ ANSWERS

Knowledge check

Question 1)
Which are some of the assets included in a Machine Learning Workspace? Select all that apply.

  • Key vault
  • Data
  • Pipelines
  • Compute targets
  • Models
  • Notebooks

Question 2)
What other Azure resources can be created alongside an ML workspace?
Select all that apply.

  • Virtual network
  • Key vault instance
  • Application Insights instance
  • Storage account
  • App Service Plan
  • Container registry

Question 3)
True or False?
Azure Machine Learning Workspaces also support RBAC.

  • True
  • False

Question 4)
Which of the following Azure Machine Learning Studio tools is used as a drag and drop interface for machine learning model development without the need to code?

  • Designer
  • Automated Machine learning
  • Notebooks

Question 5)
For which development languages does Azure Machine Learning provide software development kits (SDK)? Choose all that apply.

  • R
  • Java
  • Ruby
  • Python


Knowledge check

Question 1)
When using a script-based experiment to train a model, what is the purpose of the following commands in the script?
diabetes = pd.read_csv('data.csv')
X, y = diabetes[['Feature1','Feature2','Feature3']].values, diabetes['Label'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)

  • Training a logistic regression model
  • Save the trained model
  • Calculate accuracy
  • Prepare the dataset

Question 2)
When using a script-based experiment to train a model, what is the purpose of the following commands in the script?
os.makedirs('outputs', exist_ok=True)
joblib.dump(value=model, filename='outputs/model.pkl')

  • Train a logistic regression model
  • Prepare the dataset
  • Save the trained model
  • Calculate accuracy

Question 3)
In order to run the script as an experiment, you have to create a ScriptRunConfig.
What is the purpose of the following commands?
packages = CondaDependencies.create(conda_packages=['scikit-learn','pip'],
    pip_packages=['azureml-defaults'])
sklearn_env.python.conda_dependencies = packages

  • Create a Python environment for the experiment
  • Ensure the required packages are installed
  • Submit the experiment
  • Create a script config

Question 4)
Which of the below libraries is used to read the arguments passed to the script?

  • numpy
  • joblib
  • pandas
  • argparse
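
For reference, here is a minimal sketch of how a training script typically reads a hyperparameter with argparse (the --reg-rate argument name is illustrative):

import argparse

# Parse the arguments passed to the script
parser = argparse.ArgumentParser()
parser.add_argument('--reg-rate', type=float, dest='reg_rate', default=0.01)
args = parser.parse_args()
reg = args.reg_rate  # use the parsed value to configure training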

Question 5)
What happens when you register a model with the same name as an existing model?

  • It creates a new version of the model.
  • It prompts for confirmation and overwrites the previous model if accepted.
  • You cannot save a new model with the name of an existing model.
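
For context, registering a model under a name that already exists simply creates a new version. A minimal sketch with the SDK (assuming ws is your Workspace; the model name and path are illustrative):

from azureml.core import Model

model = Model.register(workspace=ws,
    model_name='classification_model',  # an existing name creates a new version
    model_path='outputs/model.pkl')
print(model.name, model.version)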


Test prep Quiz Answers

Question 1)
You installed the Azure Machine Learning Python SDK and you want to use it to create a workspace named “aml-workspace” on your subscription.
How do you code this in Python?

from azureml.core import Workspace
ws = Workspace.create(name='aml-workspace',
    subscription_id='123456-abc-123…',
    resource_group='aml-resources',
    create_resource_group=True,
    location='eastus'
    )

Question 2)
You installed the Azure Machine Learning CLI extension and you want to use it to create an ML workspace in an existing resource group.
Which Azure CLI command does this?

  • az ml ws create -w 'aml-workspace' -g 'aml-resources'
  • new az ml workspace create -w 'aml-workspace' -g 'aml-resources'
  • az ml workspace create -w 'aml-workspace' -g 'aml-resources'
  • az ml new workspace create -w 'aml-workspace' -g 'aml-resources'

Question 3)
Which of the following package managers and CLI commands can you use to install the Azure Machine Learning SDK for Python?

  • pip install azureml-sdk
  • yarn install azureml-sdk
  • nuget azureml-sdk
  • npm install azureml-sdk

Question 4)
If you want to connect to your Azure ML workspace using a configuration file, which Python command can you use?

  • from azureml.core import Workspace
    ws = Workspace.from_config()


Question 5)
After an experiment run has been completed, what run object method can you use to list the generated files?

  • list_file_names
  • get_file_names
  • download_file
  • download_files
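
As a quick sketch, assuming run is a completed Run object, the files it generated can be listed like this:

for file in run.get_file_names():
    print(file)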

Question 6)
After you run an experiment to train a model, you want to store the model in the Azure ML workspace, so it can be available to other experiments and services.
How can you do this?

  • Save the experiment script as a notebook
  • Save the model as a file in a compute instance
  • Save the model as a file in a Key Vault instance
  • Register the model in the workspace

Question 7)
If you want to view the models you registered in Azure ML studio using the Model object, which Python command can you use?

  • from azureml.core import Model
    for model in Model.list(ws):
        print(model.name, 'version:', model.version)


WEEK 2 QUIZ ANSWERS

Knowledge check

Question 1)
When planning for datastores, which data file format should perform better?

  • Parquet
  • XML
  • CSV
  • XLS

Question 2)
True or False?
You cannot access datastores by name.

  • True
  • False

Question 3)
If you want to change the default datastore, what method should you use?

  • new_default_datastore()
  • modify_default_datastore()
  • change_default_datastore()
  • set_default_datastore()
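
For reference, a minimal sketch of switching the default datastore (the datastore name 'blob_data' is illustrative):

from azureml.core import Workspace

ws = Workspace.from_config()
ws.set_default_datastore('blob_data')  # the named datastore becomes the default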

Question 4)
What types of datasets can be created in Azure Machine Learning? Select all that apply.

  • Tabular
  • Media
  • File
  • Notebook

Question 5)
To create a tabular dataset using the SDK, which method of the Dataset.Tabular class should you use?

  • from_tabular_files
  • from_tabular_dataset
  • from_delimited_files
  • from_files_tabular


Knowledge check

Question 1)
Which package managers are usually used in the installation of a Python virtual environment? Select all that apply.

  • pip
  • pandas
  • numpy
  • conda

Question 2)
You saved a specification file named conda.yml and you want to use it to create an Azure ML environment.
Which SDK command does the job?

  • from azureml.core import Environment
    env = Environment.from_conda_specification(name='training_environment',
    file_path='./conda.yml')

Question 3)
You want to create an Azure ML environment by specifying the packages you need.

Which SDK commands do the job?

  • from azureml.core import Environment
    from azureml.core.conda_dependencies import CondaDependencies
    env = Environment('training_environment')
    deps = CondaDependencies.create(conda_packages=['scikit-learn','pandas','numpy'],
    pip_packages=['azureml-defaults'])
    env.python.conda_dependencies = deps


Question 4)
If you are running a notebook experiment on an Azure Machine Learning compute instance, what type of compute are you using?

  • Local compute
  • Attached compute
  • Compute clusters

Question 5)
If you have an Azure Databricks cluster that you want to use for experiment running and model training, which type of compute target is this?

  • Managed
  • Unmanaged


Test prep Quiz Answers

Question 1)
Which Python commands should you use to create and register a tabular dataset using the from_delimited_files method of the Dataset.Tabular class?

  • from azureml.core import Dataset
    blob_ds = ws.get_default_datastore()
    csv_paths = [(blob_ds, 'data/files/current_data.csv'),
    (blob_ds, 'data/files/archive/*.csv')]
    tab_ds = Dataset.Tabular.from_delimited_files(path=csv_paths)
    tab_ds = tab_ds.register(workspace=ws, name='csv_table')


Question 2)
You’re creating a file dataset using the from_files method of the Dataset.File class.
You also want to register it in the workspace with the name img_files.
Which SDK commands can you use?

  • from azureml.core import Dataset
    blob_ds = ws.get_default_datastore()
    file_ds = Dataset.File.from_files(path=(blob_ds, 'data/files/images/*.jpg'))
    file_ds = file_ds.register(workspace=ws, name='img_files')


Question 3)
What methods can you use from the Dataset class to retrieve a dataset after registering it? Select all that apply.

  • find_by_id
  • get_by_id
  • find_by_name
  • get_by_name

Question 4)
To retrieve a specific version of a dataset, which SDK commands should you use?

  • img_ds = Dataset.get_by_name(workspace=ws, name='img_files', version='2')
  • img_ds = Dataset.get_by_name(workspace=ws, name='img_files', version_2)
  • img_ds = Dataset.get_by_name(workspace=ws, name='img_files', version=2)
  • img_ds = Dataset.get_by_name(workspace=ws, name='img_files', version(2))

Question 5)
Which SDK commands can you use to view the registered environments in your workspace?

  • from azureml.core import Environment
    env_names = Environment.list(workspace=ws)
    for env_name in env_names:
        print('Name:', env_name)

Question 6)
You are defining a compute configuration for a managed compute target using the SDK.
Which of the below commands is correct?

  • compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2',
    min_nodes=0, max_nodes=4,
    vm_priority='dedicated')


Question 7)
You created a compute target and now you want to use it for an experiment. You want to specify the compute target using a ComputeTarget object.
Which of the SDK commands below can you use?

  • compute_name = "aml-cluster"
    training_cluster = ComputeTarget(workspace=ws,
    name=compute_name)
    training_env = Environment.get(workspace=ws, name='training_environment')
    script_config = ScriptRunConfig(source_directory='my_dir',
    script='script.py',
    environment=training_env,
    compute_target=training_cluster)



WEEK 3 QUIZ ANSWERS

Knowledge check

Question 1)
True or False?
Pipelines in Azure Machine Learning must not be confused with Azure DevOps pipelines as they are not the same and they do not work together.

  • True
  • False

Question 2)
What is the term used for each task in the workflow of an Azure Machine Learning pipeline?

  • Runs
  • Steps
  • Jobs
  • Functions

Question 3)
In what manner can the steps of a pipeline use compute targets?
Select all that apply.

  • Each step must be assigned a single compute target
  • All steps will run on the same compute targets
  • Each step can run on a specific compute target.

Question 4)
What should you use to pass data between steps in an Azure ML pipeline?

  • Objects
  • Variables
  • Methods
  • Parameters

Question 5)
Which parameter should you use in the step configuration if you need to reuse the step?

  • allow_reuse = True
  • enable_reuse = False
  • allow_reuse = False
  • enable_reuse = True
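
For context, a minimal sketch of a pipeline step with reuse enabled (the step name, script, and compute target are illustrative):

from azureml.pipeline.steps import PythonScriptStep

step1 = PythonScriptStep(name='prepare data',
    source_directory='scripts',
    script_name='data_prep.py',
    compute_target='aml-cluster',
    allow_reuse=True)  # reuse the cached step output when inputs and settings are unchanged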

Question 6)
Which object should you use if you want to define parameters for a pipeline?

  • AllowParameter
  • ImportParameter
  • PipelineParameter
  • ParameterPipeline
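
A minimal sketch of defining a pipeline parameter (the parameter name reg_rate is illustrative):

from azureml.pipeline.core.graph import PipelineParameter

reg_param = PipelineParameter(name='reg_rate', default_value=0.01)
# pass it to a step as a script argument, e.g. arguments=['--reg', reg_param]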


Knowledge check

Question 1)
What is inferencing in Azure Machine Learning?

  • Using a trained model to predict labels for old data
  • Training a model to predict labels
  • Using a trained model to predict labels for new, untrained data

Question 2)
What kind of compute targets can you use to deploy a model as a real-time web service?
Select all that apply.

  • Local compute.
  • Azure Function
  • Azure Kubernetes Service (AKS) cluster
  • Azure Container Instance (ACI)
  • Azure App Service Plan
  • Azure Machine Learning compute instance
  • Internet of Things (IoT) module

Question 3)
Which two functions must be included in the entry script of a model? Select all that apply.

  • init()
  • run()
  • dispatcher()
  • run(raw_data)
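
For reference, a minimal sketch of an entry (scoring) script containing the two required functions (the model name and input format are illustrative):

import json
import joblib
import numpy as np
from azureml.core.model import Model

def init():
    # Runs once when the service starts: load the registered model
    global model
    model_path = Model.get_model_path('classification_model')
    model = joblib.load(model_path)

def run(raw_data):
    # Runs for each request: deserialize the input, predict, return the results
    data = np.array(json.loads(raw_data)['data'])
    predictions = model.predict(data)
    return predictions.tolist()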

Question 4)
True or False?
If you’re using an AKS cluster as a compute target, you can define the cluster in the deployment configuration.

  • True
  • False

Question 5)
In a production environment, you must restrict access to your services by applying authentication.
Which types of authentication can you use? Select all that apply.

  • Token
  • Just-in-time (JIT)
  • Key
  • Shared access signatures (SAS)


Test prep Quiz Answers

Question 1)
You defined three steps named step1, step2, and step3 for a pipeline you want to create.
You now want to assign those steps to the pipeline and run it as an experiment.
Which of the SDK commands below can you use?

  • train_pipeline = Pipeline(workspace = ws, steps = [step1,step2,step3])
    experiment = Experiment(workspace = ws, name = 'training-pipeline')
    pipeline_run = experiment.submit(train_pipeline)

Question 2)
To publish a pipeline you created, which SDK commands should you use?

published_pipeline = pipeline.publish(name='training_pipeline',
    description='Model training pipeline',
    version='1.0')

Question 3)
True or False?
The parameters for a pipeline have to be defined before publishing it.

  • True
  • False

Question 4)
After publishing a parametrized pipeline, you can pass parameter values in the JSON payload for the REST interface.
Which SDK commands can you use for this?

  • response = requests.post(rest_endpoint,
    headers=auth_header,
    json={"ExperimentName": "run_training_pipeline",
    "ParameterAssignments": {"reg_rate": 0.1}})


Question 5)
You want to create a schedule for your pipeline. What object can you define for this task?

  • ScheduleTimer
  • ScheduleRecurrence
  • ScheduleConfig
  • ScheduleSync
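
A minimal sketch of scheduling a published pipeline with a ScheduleRecurrence object (assuming published_pipeline from the publish example above; the names are illustrative):

from azureml.pipeline.core import ScheduleRecurrence, Schedule

daily = ScheduleRecurrence(frequency='Day', interval=1)
pipeline_schedule = Schedule.create(ws, name='Daily Training',
    description='trains model every day',
    pipeline_id=published_pipeline.id,
    experiment_name='Training_Pipeline',
    recurrence=daily)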

Question 6)
You want to configure an AKS cluster as a compute target for your service to be deployed on.
Which SDK commands do the job?

  • from azureml.core.compute import ComputeTarget, AksCompute
    cluster_name = 'aks-cluster'
    compute_config = AksCompute.provisioning_configuration(location='eastus')
    production_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
    production_cluster.wait_for_completion(show_output=True)

Question 7)
What are the default authentication methods for ACI services and AKS services? Select all that apply.

  • Token-based for AKS services.
  • Disabled for ACI services
  • Disabled for AKS services
  • Token-based for ACI services
  • Key-based for AKS services



WEEK 4 QUIZ ANSWERS

Knowledge check

Question 1)
What is the terminology used for long-running tasks that operate on large volumes of data?

  • Stack operations
  • Cluster operations
  • Bunch operations
  • Batch operations

Question 2)
When creating a batch inferencing pipeline, which of the tasks below should be performed first?

  • Create a pipeline with a ParallelRunStep
  • Register a model
  • Create a scoring script
  • Run the pipeline and retrieve the step output

Question 3)
Which are the two functions included in the scoring script of the batch inference pipeline? Select all that apply.

  • run(mini_batch)
  • init()
  • run(batch)
  • init(batch)
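
For reference, a minimal sketch of a batch scoring script with these two functions (the model name is illustrative, and the per-file scoring logic is left as a placeholder):

import joblib
from azureml.core.model import Model

def init():
    # Runs once per worker node: load the registered model
    global model
    model_path = Model.get_model_path('classification_model')
    model = joblib.load(model_path)

def run(mini_batch):
    # Runs for each mini-batch of files: return one result per input item
    results = []
    for f in mini_batch:
        # load the file at path f, predict, and append a result string
        results.append(f)
    return results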

Question 4)
What is the type of ParallelRunStep that must be used in the pipeline for parallel batch inferencing? Select all that apply.

  • Parameter
  • Method
  • Class
  • Object

Question 5)
After you run your pipeline, in which file can you observe the results?

  • OutputFileDatasetConfig
  • parallel_run_step.txt
  • parallel_run_config


Knowledge check

Question 1)
What are hyperparameters?

  • Values that are passed into a function
  • Values used to configure training behavior which are not derived from the training data
  • Values determined from the training features

Question 2)
What does the process of hyperparameter tuning consist of?

  • Training multiple models, using the same algorithm and training data but different hyperparameter values.
  • Training multiple models, using the same algorithm but different training data and different hyperparameter values.
  • Training multiple models, using the same algorithm, training data, and hyperparameter values.
  • Training multiple models, using different algorithms, same training data and different hyperparameter values.

Question 3)
Which of the following are valid discrete distributions from which you can select discrete values for discrete hyperparameters? Select all that apply.

  • qlognormal
  • qnormal
  • quniform
  • qlogbasic
  • qloguniform
  • qbasic

Question 4)
Which of the following are valid types of sampling used in hyperparameter tuning? Select all that apply.

  • Byzantine sampling
  • Bayesian sampling
  • Grid sampling
  • Random sampling
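
For context, a minimal sketch of configuring each valid sampling type over a search space (param_space is assumed to be defined as in the search-space example in the test prep quiz below):

from azureml.train.hyperdrive import (GridParameterSampling,
                                      RandomParameterSampling,
                                      BayesianParameterSampling)

grid_sampling = GridParameterSampling(param_space)          # every combination; discrete (choice) values only
random_sampling = RandomParameterSampling(param_space)      # randomly chosen combinations
bayesian_sampling = BayesianParameterSampling(param_space)  # new combinations informed by previous results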

Question 5)
Which of the following are valid types of early termination policies you can implement? Select all that apply.

  • Median stopping policy
  • Waiting policy
  • Bandit policy
  • Truncation selection policy
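
As a sketch, a bandit policy can be configured like this (the slack_amount of 0.2 is illustrative); a median stopping policy example appears in the test prep quiz below:

from azureml.train.hyperdrive import BanditPolicy

# Abandon runs whose best metric trails the best run so far by more than 0.2
early_termination_policy = BanditPolicy(slack_amount=0.2,
    evaluation_interval=1,
    delay_evaluation=5)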


Test prep Quiz Answers

Question 1)
To register a model using a reference to the Run used to train the model, which SDK commands can you use?

  • from azureml.core import Model
    run.register_model(model_name='classification_model',
    model_path='outputs/model.pkl',
    description='A classification model')

Question 2)
Which of the following SDK commands can you use to create a parallel run step?

  • parallelrun_step = ParallelRunStep(
    name='batch-score',
    parallel_run_config=parallel_run_config,
    inputs=[batch_data_set.as_named_input('batch_data')],
    output=output_dir,
    arguments=[],
    allow_reuse=True)

Question 3)
After the run of the pipeline has completed, which code can you use to retrieve the parallel_run_step.txt file from the output of the step?

  • for root, dirs, files in os.walk('results'):
        for file in files:
            if file.endswith('parallel_run_step.txt'):
                result_file = os.path.join(root, file)

Question 4)
You want to define a search space for hyperparameter tuning. The batch_size hyperparameter can have the value 128, 256, or 512 and the learning_rate hyperparameter can have values from a normal distribution with a mean of 10 and a standard deviation of 3.
How can you code this in Python?

  • from azureml.train.hyperdrive import choice, normal
    param_space = {
        '--batch_size': choice(128, 256, 512),
        '--learning_rate': normal(10, 3)
    }


Question 5)
How does random sampling select values for hyperparameters?

  • It tries every possible combination of parameters in the search space
  • It tries to select parameter combinations that will result in improved performance from the previous selection
  • From a mix of discrete and continuous values

Question 6)
True or False?
Bayesian sampling can be used only with choice, uniform and quniform parameter expressions, and it can be combined with an early-termination policy.

  • True
  • False

Question 7)
You want to implement a median stopping policy. How can you code this in Python?

  • from azureml.train.hyperdrive import MedianStoppingPolicy
    early_termination_policy = MedianStoppingPolicy(evaluation_interval=1,
    delay_evaluation=5)



WEEK 5 QUIZ ANSWERS

Knowledge check

Question 1)
Which types of machine learning task does automated machine learning support for model training? Select all that apply.

  • Time Series Forecasting
  • Regression
  • Classification
  • Clustering
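
For reference, a minimal sketch of configuring an automated ML run for one of these task types (the dataset, metric, and compute names are illustrative):

from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(task='classification',
    training_data=train_ds,          # a registered tabular dataset
    label_column_name='Label',
    primary_metric='AUC_weighted',
    compute_target='aml-cluster',
    iterations=4)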

Question 2)
Which of the following classification algorithms does Azure Machine Learning include support for? Select all that apply.

  • Linear Regression
  • Decision Tree
  • Deep Neural Network (DNN) Classifier
  • Logistic Regression

Question 3)
Which of the following forecasting algorithms does Azure Machine Learning include support for? Select all that apply.

  • Naive Bayes
  • Elastic Net
  • Light Gradient Boosting Machine (GBM)
  • Linear Support Vector Machine (SVM)

Question 4)
True or False?
Automated machine learning can apply preprocessing transformations to your data with the purpose of improving the performance of the model.

  • True
  • False

Question 5)
Which is one of the most important settings you must specify in relation to Automated ML?

  • Second validation dataset or dataframe
  • The primary metric
  • A numpy array of X values containing the training features
  • Dataframe of training data


Knowledge check

Question 1)
What is the name of the parameter that configures the amount of variation caused by adding noise?

  • Sigma
  • Lambda
  • Psi
  • Epsilon

Question 2)
True or False?
The Epsilon parameter can apply the privacy principle to a specific group of people or everyone participating in a study.

  • True
  • False

Question 3)
How does the epsilon value trade off privacy against accuracy? Select all that apply.

  • Low epsilon value equals less privacy and more accuracy
  • High epsilon value equals less privacy and more accuracy
  • High epsilon value equals more privacy and less accuracy
  • Low epsilon value equals more privacy and less accuracy

Question 4)
Which of the following statements is true in a differential privacy solution?

  • In a dataset, numeric values that are encrypted cannot be used
  • In a dataset, all columns that are numeric are converted to the mean value
  • During analysis, noise is added to the data so that aggregations are statistically consistent with the data distribution but non-deterministic

Question 5)
What should you do in a differential privacy solution to ensure that an individual’s data has a low impact on the aggregated results?

  • Set epsilon to 0.5
  • Set epsilon to a high value.
  • Set epsilon to a low value


Knowledge check

Question 1)
What types of explainers can you create from the azureml-interpret package? Select all that apply.

  • StaticExplainer
  • PFIExplainer
  • MimicExplainer
  • TabularExplainer
  • ShortExplainer

Question 2)
Which of the explainers below acts as a wrapper around various SHAP explainer algorithms?

  • MimicExplainer
  • TabularExplainer
  • PFIExplainer

Question 3)
What method should you call to retrieve global importance values for the features of your model?

  • get_feature_importance_dict()
  • explain_global()
  • get_feature_importance.dict()
  • explainglobal()
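
As a quick sketch, assuming tab_explainer is a TabularExplainer created as in the test prep examples below, global importance values can be retrieved like this:

global_explanation = tab_explainer.explain_global(X_train)
feature_importance = global_explanation.get_feature_importance_dict()
for feature, importance in feature_importance.items():
    print(feature, ':', importance)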

Question 4)
What methods can you call to retrieve feature names and importance values for local feature importance? Choose all that apply.

  • get_feature_importance_dict()
  • get_ranked_local_names()
  • get_feature_values()
  • get_ranked_local_values()

Question 5)
True or False?
The PFIExplainer supports local and global feature importance explanations.

  • True
  • False


Test prep Quiz Answers

Question 1)
You need to retrieve the primary metric for a regression task. How can you code this in Python?

  • from azureml.train.automl.utilities import get_primary_metrics
    get_primary_metrics('regression')

Question 2)
You need to retrieve the best run and its model. How can you code that with the SDK?

  • best_run, fitted_model = automl_run.get_output()
    best_run_metrics = best_run.get_metrics()
    for metric_name in best_run_metrics:
        metric = best_run_metrics[metric_name]
        print(metric_name, metric)

Question 3)
How can you code an instance of a MimicExplainer for a model named loan_model?

  • from interpret.ext.blackbox import MimicExplainer
    from interpret.ext.glassbox import DecisionTreeExplainableModel
    mim_explainer = MimicExplainer(model=loan_model,
    initialization_examples=X_test,
    explainable_model = DecisionTreeExplainableModel,
    features=['loan_amount','income','age','marital_status'],
    classes=['reject', 'approve'])

Question 4)
How can you code an instance of a TabularExplainer for a model named loan_model?

  • from interpret.ext.blackbox import TabularExplainer
    tab_explainer = TabularExplainer(model=loan_model,
    initialization_examples=X_test,
    features=['loan_amount','income','age','marital_status'],
    classes=['reject', 'approve'])

Question 5)
How can you code a PFIExplainer for a model named loan_model?

  • from interpret.ext.blackbox import PFIExplainer
    pfi_explainer = PFIExplainer(model = loan_model,
    features=['loan_amount','income','age','marital_status'],
    classes=['reject', 'approve'])

Question 6)
You need to retrieve local feature importance from a TabularExplainer.
How can you code this in the SDK?

  • local_tab_explanation = tab_explainer.explain_local(X_test[0:5])
    local_tab_features = local_tab_explanation.get_ranked_local_names()
    local_tab_importance = local_tab_explanation.get_ranked_local_values()

Question 7)
Which packages do you need to install in the run environment to be able to create an explanation in the experiment script? Select all that apply.

  • azureml-contrib-interpret
  • azureml-blackbox
  • azureml-interpret
  • azureml-explainer



WEEK 6 QUIZ ANSWERS


Knowledge check

Question 1)
In cases where certain groups are overrepresented or the data is skewed, which potential cause of disparity applies?

  • Data imbalance
  • Indirect correlation
  • Societal bias

Question 2)
When attempting to mitigate bias, if you want to differentiate features that are directly predictive from features that encapsulate complex or nuanced relationships, which strategy should you apply?

  • Balance training and validation data
  • Extensive feature selection and engineering analysis.
  • Evaluate models for disparity based on significant features.

Question 3)
True or false?
Applying a trade-off in overall predictive performance across sensitive feature groups means that a model that is 97.5% accurate with similar performance across all groups can be more desirable than a model that is 99.9% accurate but biased against one group.

  • True
  • False

Question 4)
Which mitigation algorithm is a post-processing technique that applies a constraint to an existing classifier and transforms the prediction as appropriate?

  • Threshold Optimizer
  • Grid Search
  • Exponentiated Gradient
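
For context, a minimal sketch of applying Fairlearn's ThresholdOptimizer as a post-processing mitigation (the fitted model, datasets, and sensitive feature values are assumptions):

from fairlearn.postprocessing import ThresholdOptimizer

optimizer = ThresholdOptimizer(estimator=model,  # an already-trained classifier
    constraints='demographic_parity',
    prefit=True)
optimizer.fit(X_train, y_train, sensitive_features=S_train)
y_pred = optimizer.predict(X_test, sensitive_features=S_test)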

Question 5)
Which mitigation algorithms support binary classification and regression models? Select all that apply.

  • Threshold Optimizer
  • Grid Search
  • Exponentiated Gradient


Knowledge check

Question 1)
True or false?
Application Insights can also be used to monitor applications that are not running in Azure.

  • True
  • False

Question 2)
What method of the Workspace object can you use in the SDK to determine the associated Application Insights resource?

  • get_overview()
  • get_details()
  • get_associations()
  • get_attachments()
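
A minimal sketch of retrieving the associated Application Insights resource from the workspace details:

from azureml.core import Workspace

ws = Workspace.from_config()
details = ws.get_details()
print(details['applicationInsights'])  # resource ID of the linked Application Insights instance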

Question 3)
In order to capture telemetry data for Application Insights, what statement can you use to write values to the standard output log in the scoring script of your service?

  • print
  • print_output
  • write
  • write_output

Question 4)
From the Azure Portal, which blade of the Application Insights will allow you to query log data?

  • Log Analytics
  • Activity logs
  • Smart Detection
  • Metrics

Question 5)
What query language is used to query log data in App Insights?

  • Object Query Language (OQL)
  • Concept-Oriented Query Language (COQL)
  • Standard Query Language (SQL)
  • Kusto Query Language (KQL)


Knowledge check

Question 1)
What does Azure Machine Learning use to monitor data drift?

  • Datastores
  • Datasets
  • Databases

Question 2)
Which are the two dataset types you need to register to monitor data drift? Select all that apply.

  • Target dataset
  • Baseline dataset
  • Primary dataset
  • Scope dataset

Question 3)
After creating the registered datasets, what can you define to detect data drift and trigger alerts?

  • Dataset analyzer
  • Dataset metrics
  • Dataset monitor
  • Dataset logger

Question 4)
After creating the dataset monitor, what function can you use to compare the baseline dataset to the target dataset?

  • Backload
  • Backtrack
  • Backflow
  • Backfill

Question 5)
What are the timeframes that can be configured to run a data drift monitor schedule? Choose all that apply.

  • Hourly
  • Daily
  • Monthly
  • Weekly


Test prep Quiz Answers

Question 1)
Which parity constraint can be used with any of the mitigation algorithms to minimize disparity in the selection rate across sensitive feature groups?

  • Error rate parity
  • Bounded group loss
  • Demographic parity
  • Equalized odds

Question 2)
Which parity constraint can be used with any of the mitigation algorithms to minimize disparity in the combined true positive rate and false positive rate across sensitive feature groups?

  • Error rate parity
  • False-positive rate parity
  • True positive rate parity
  • Equalized odds

Question 3)
You are training a binary classification model to determine who should be targeted in a marketing campaign.
How can you assess if the model is fair and will not discriminate based on ethnicity?

  • Remove the ethnicity feature from the training dataset.
  • Compare disparity between selection rates and performance metrics across ethnicities.
  • Evaluate each trained model with a validation dataset, and use the model with the highest accuracy score. An accurate model is inherently fair.

Question 4)
When deploying a new real-time service, Application Insights can be enabled in the deployment configuration for the service.
How would you code that using the SDK?

  • dep_config = AciWebservice.deploy_configuration(cpu_cores = 1,
    memory_gb = 1,
    enable_app_insights=True)

Question 5)
For web services that have already been deployed, you can update them and enable Application Insights using the Azure ML SDK.
How would you code that?

  • service = ws.webservices['my-svc']
    service.update(enable_app_insights=True)

Question 6)
You want to backfill a dataset monitor based on monthly changes in data for the previous 3 months. How would you code that in the SDK?

  • import datetime as dt
    backfill = monitor.backfill(dt.datetime.now() - dt.timedelta(months=3), dt.datetime.now())
  • import datetime as dt
    backfill = monitor.backfill(dt.datetime.now(), dt.timedelta(months=3), dt.datetime.now())
  • import datetime as dt
    backfill = monitor_backfill(dt.datetime.now(), dt.timedelta(months=3), dt.datetime.now())
  • import datetime as dt
    backfill = monitor_backfill(dt.datetime.now() - dt.timedelta(months=3), dt.datetime.now())

Question 7)
You want to schedule a data drift monitor to run every day and send an alert if the drift magnitude is greater than 0.2. How would you code that in Python?

  • alert_email = AlertConfiguration('user@contoso.com')
    monitor = DataDriftDetector.create_from_datasets(ws, 'dataset-drift-detector',
    baseline_data_set, target_data_set,
    compute_target=cpu_cluster,
    frequency='Day', latency=2,
    drift_threshold=.2,
    alert_configuration=alert_email)