365.221/2/4/6/7/8/9/30/56/67/81/325/326/348/349, UE Hands-on AI II, Rainer Dangl et al., 2026S


Try all activation functions once. Which one is the worst choice with regard to the vanishing gradient issue?

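A quick way to compare the candidates is to evaluate each activation's derivative at its steepest point. A minimal pure-Python sketch (assuming the usual choices of sigmoid, tanh, and ReLU are offered; not the app's own code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Maximum slope of each activation (at x = 0 for sigmoid/tanh, any x > 0 for ReLU).
# The smaller this factor, the faster gradients shrink when multiplied layer by layer.
d_sigmoid = sigmoid(0.0) * (1.0 - sigmoid(0.0))  # 0.25
d_tanh = 1.0 - math.tanh(0.0) ** 2               # 1.0
d_relu = 1.0                                     # 1.0 on its active side
print(d_sigmoid, d_tanh, d_relu)
```

Sigmoid's derivative never exceeds 0.25, which is why it is the classic worst case for vanishing gradients.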

Now train with the following settings:

  • seed: 2026
  • learning rate 0.001
  • activation: sigmoid
  • hidden: 3072
  • epochs: 5
  • model depths: all
  • image size: 32
  • batch size: 32

Look at the loss plot. For which model depth(s) can you see that there is some learning taking place? 


Why do you think that, even after switching the activation function, some model depths don't seem to learn well?


Why do you think that is the case? It might be helpful to check out the activation functions and their derivative plots on the first tab.


Why can the gradient vanish during backpropagation?

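The mechanism can be sketched numerically: by the chain rule, the gradient reaching an early layer contains one activation-derivative factor per layer, and for sigmoid each factor is at most 0.25. A minimal sketch (not the app's actual computation):

```python
import math

def dsigmoid(x):
    # Derivative of the sigmoid: s(x) * (1 - s(x)), maximal (0.25) at x = 0.
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

# Best case: every pre-activation sits at 0, where sigmoid is steepest.
# Even then, the chain-rule product of derivatives shrinks geometrically with depth.
for depth in (2, 5, 10, 20):
    grad_scale = dsigmoid(0.0) ** depth
    print(f"depth {depth:2d}: gradient scaled by {grad_scale:.2e}")
```

At depth 10 the factor is already below 1e-6, so early layers receive essentially no learning signal.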

In the Demo: one SGD step, set the random seed to 2026 and the learning rate to 0.1, then run the demo.

The initial weights should be:

Initial weights:

tensor([[ 0.3753,  0.1500],
        [ 0.1319, -0.6104]])

Initial biases:

tensor([ 0.0136, -0.3036])

The gradients should be:

Gradient computation:

grad(W):

tensor([[ 0.2432, -0.1886],
        [-0.2432,  0.1886]])

grad(b):

tensor([ 0.0957, -0.0957])

grad(W) norm: 0.4351941645145416

grad(b) norm: 0.13534791767597198

The updated weights and biases should be:

Updated weights:

tensor([[ 0.3510,  0.1689],
        [ 0.1562, -0.6293]])

Updated biases:

tensor([ 0.0041, -0.2940])

Can you explain how these updated weights and biases are calculated? Write down the formula for the complete computation

initial weight -> updated weight

Also give an example computation for one of the parameters.

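For reference, the plain-SGD rule is W_new = W - lr * grad(W), and likewise for b. A pure-Python check against the demo's numbers above, with lr = 0.1:

```python
lr = 0.1
W = [[0.3753, 0.1500], [0.1319, -0.6104]]
gW = [[0.2432, -0.1886], [-0.2432, 0.1886]]
b = [0.0136, -0.3036]
gb = [0.0957, -0.0957]

# Vanilla SGD: subtract the learning rate times the gradient from each parameter.
W_new = [[w - lr * g for w, g in zip(wr, gr)] for wr, gr in zip(W, gW)]
b_new = [x - lr * g for x, g in zip(b, gb)]

# Example for a single entry: 0.3753 - 0.1 * 0.2432 = 0.35098 ≈ 0.3510
print(W_new)  # ≈ [[0.3510, 0.1689], [0.1562, -0.6293]]
print(b_new)  # ≈ [0.0041, -0.2940]
```

The results match the demo's updated weights and biases up to rounding in the printed tensors.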

Train for 5 epochs with these settings:

  • learning rate=0.01
  • momentum=0.9
  • seed=0
  • no early stopping

What is the overall accuracy (enter the full number with all three digits after the comma)?

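Note that with momentum=0.9 the update differs from plain SGD: in PyTorch's convention, a velocity buffer accumulates gradients (v = mu * v + g) and the parameter steps by lr * v. A toy scalar sketch with made-up gradient values, just to show the rule:

```python
lr, mu = 0.01, 0.9
theta, v = 1.0, 0.0

# Same gradient three times: the velocity accumulates, so later steps are larger.
for g in (0.5, 0.5, 0.5):
    v = mu * v + g    # velocity update (PyTorch-style momentum)
    theta -= lr * v   # parameter step
    print(f"v={v:.4f}  theta={theta:.5f}")
```

With a persistent gradient direction, momentum effectively amplifies the step size, which is why momentum runs often converge in fewer epochs than plain SGD.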

Select the Street View House Numbers (SVHN) dataset and configure:

  • batch size=32
  • no augmentation
  • validation split=0.1
  • use official test set

Load the CIFAR10 preset and apply the architecture. How many trainable parameters does the model have?

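The counting itself follows two rules: a conv layer has k·k·c_in·c_out weights plus c_out biases, and a linear layer has n_in·n_out weights plus n_out biases. A sketch for a hypothetical small CNN (not the actual CIFAR10 preset, whose layers you read off in the app):

```python
def conv2d_params(k, c_in, c_out):
    # One k*k kernel per (input, output) channel pair, plus one bias per output channel.
    return k * k * c_in * c_out + c_out

def linear_params(n_in, n_out):
    # Full weight matrix plus one bias per output unit.
    return n_in * n_out + n_out

# Hypothetical example: 3x32x32 input -> conv(3->16, k=3) -> conv(16->32, k=3),
# two 2x2 poolings reduce 32x32 to 8x8, then flatten (32*8*8) -> linear to 10 classes.
total = (conv2d_params(3, 3, 16)
         + conv2d_params(3, 16, 32)
         + linear_params(32 * 8 * 8, 10))
print(total)  # 25578
```

Pooling and activation layers contribute no trainable parameters, so only the conv and linear layers enter the sum.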

Now try to increase the accuracy to at least 80% overall. You can:

  • modify the architecture
  • change the number of epochs
  • adjust the learning rate
  • adjust the momentum
  • use early stopping

Include screenshots of

  • the PyTorch model definition
  • the loss plot with the progress bar and selected model info
  • the normalized confusion matrix and the accuracy info
  • the prediction scores and class labels of 8 samples ('CNN: Predict' tab)

Do you see misclassified samples in the prediction scores? If yes, what can be said about them when looking at the top 3 probabilities that are listed?

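When reading the top-3 scores, keep in mind that they are softmax probabilities of the model's logits; misclassified samples typically show two close probabilities rather than one dominant class. A small sketch with made-up logits:

```python
import math

# Hypothetical logits for one sample; softmax turns them into probabilities.
logits = {"cat": 2.1, "dog": 1.9, "truck": 0.3, "ship": -0.5}
exps = {k: math.exp(v) for k, v in logits.items()}
total = sum(exps.values())
probs = {k: v / total for k, v in exps.items()}

# Top-3 classes by probability: "cat" and "dog" are nearly tied here, a typical
# signature of an uncertain (and often misclassified) prediction.
top3 = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:3]
print(top3)
```

A confidently correct prediction, by contrast, usually puts most of the probability mass on a single class.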

What is the overall accuracy of the model?

