Huawei H13-311_V3.5
Q1:
Which of the following is NOT a key feature that enables all-scenario deployment and collaboration for MindSpore?
A. Data and computing graphs are transmitted to Ascend AI Processors.
B. Federated meta-learning enables real-time, coordinated model updates between different devices, and across the device and cloud.
C. Unified model IR delivers a consistent deployment experience.
D. Graph optimization based on software-hardware synergy shields the differences between scenarios.
Q2:
When constructing a neural network with code such as the following, you can inherit MindSpore's Cell class and override the __init__ and construct methods.
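(The original code snippet was not preserved in this dump. Below is a minimal sketch of the kind of network definition the question refers to; the class name Net and the single Dense layer are illustrative, while nn.Cell, __init__, and construct are MindSpore's actual conventions.)

import mindspore.nn as nn

class Net(nn.Cell):                  # inherit from the Cell base class
    def __init__(self):
        super().__init__()
        self.fc = nn.Dense(10, 1)    # illustrative fully connected layer

    def construct(self, x):          # override construct to define the forward pass
        return self.fc(x)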
A. TRUE
B. FALSE
Q3:
The core of the MindSpore training data processing engine is to efficiently and flexibly convert training samples (datasets) to MindRecord and provide them to the network for training.
A. TRUE
B. FALSE
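(For context, a minimal sketch of writing samples to MindRecord with MindSpore's mindspore.mindrecord.FileWriter; the file name, schema fields, and sample values are illustrative assumptions.)

import numpy as np
from mindspore.mindrecord import FileWriter

# Write one illustrative sample to a MindRecord file.
writer = FileWriter(file_name="demo.mindrecord", shard_num=1)
schema = {"feature": {"type": "float32", "shape": [4]},
          "label": {"type": "int32"}}
writer.add_schema(schema, "demo_schema")
samples = [{"feature": np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32),
            "label": 0}]
writer.write_raw_data(samples)
writer.commit()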
Q4:
All kernels in the same convolutional layer of a convolutional neural network share the same weights.
A. TRUE
B. FALSE
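(To see how weights are organized in a convolutional layer, a short sketch inspecting the kernel weights, assuming MindSpore's nn.Conv2d and its default weight layout of (out_channels, in_channels, kernel_h, kernel_w); the channel counts are illustrative.)

import mindspore.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
# Each of the 16 kernels has its own 3x3x3 weights; a kernel's weights are
# shared across spatial positions of the input, not across kernels.
print(conv.weight.shape)   # (16, 3, 3, 3)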
Q5:
Which of the following statements is false about gradient descent algorithms?
A. Each time global gradient descent updates the weights, gradients must be computed over all training samples.
B. When GPUs are used for parallel computing, mini-batch gradient descent (MBGD) takes less time than stochastic gradient descent (SGD) to complete an epoch.
C. Global gradient descent is relatively stable, which helps the model converge to the global extremum.
D. When there are too many samples and GPUs are not used for parallel computing, the convergence process of the global gradient descent algorithm is time-consuming.
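(The variants the options compare differ only in how many samples feed each weight update. A NumPy sketch on a linear least-squares loss; the function name, loss, and learning rate are illustrative assumptions.)

import numpy as np

def gradient_descent(X, y, w, lr=0.1, batch_size=None, epochs=10):
    """batch_size=None -> global (batch) GD; 1 -> SGD; in between -> MBGD."""
    n = len(X)
    batch_size = n if batch_size is None else batch_size
    for _ in range(epochs):
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # Gradient of the mean squared error over the current batch.
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w = w - lr * grad
    return w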