Questions, answers, and solutions for 365.114/246/247/248/321/322/323/324/360/361, UE Deep Learning: Architectures and Generative Techniques, Richard Freinschlag et al., 2026S, on moodle.jku.at.
You are given a standard 3×3 convolution with 5 input and 5 output channels. How many parameters would you save by replacing it with a depthwise-separable equivalent (depthwise 3×3 + pointwise 1×1)? Enter the difference in the field below.
Only enter the whole number without any separators, e.g. 1 or 10
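A hedged sanity check, assuming no bias terms (the usual convention for these comparisons): the standard convolution has 3·3·5·5 = 225 weights, while the depthwise part has 3·3·5 = 45 and the pointwise part 1·1·5·5 = 25, for 70 in total, a difference of 155. The counts can be verified in PyTorch:

import torch.nn as nn

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

standard  = nn.Conv2d(5, 5, kernel_size=3, bias=False)            # 3*3*5*5 = 225
depthwise = nn.Conv2d(5, 5, kernel_size=3, groups=5, bias=False)  # 3*3*5   = 45
pointwise = nn.Conv2d(5, 5, kernel_size=1, bias=False)            # 1*1*5*5 = 25

print(n_params(standard) - (n_params(depthwise) + n_params(pointwise)))  # 155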
You are feeding a 32×32 RGB image into an MLP-Mixer with a patch size of 8. How many patches will the network create?
Enter the whole number without any separators.
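For reference, MLP-Mixer splits the image into non-overlapping P×P patches, so a 32×32 input with patch size 8 yields (32/8)·(32/8) = 16 patches; the channel count does not affect the number of patches. A quick check:

H = W = 32          # input height and width (RGB channels don't change the count)
P = 8               # patch size
num_patches = (H // P) * (W // P)
print(num_patches)  # 16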
What is the most efficient way to store the result res, obtained by executing the function below, in a Python list arr, given that this function will be called iteratively?
@torch.enable_grad()
def computation(net: nn.Module, x: torch.Tensor):
    res = net(x).mean()
    return res
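One way to think about this: appending res as-is keeps each iteration's autograd graph alive in memory, whereas detaching it, or converting it to a plain Python float with .item(), lets the graph be freed. A minimal sketch, using a toy nn.Linear as the network (illustrative, not part of the question):

import torch
import torch.nn as nn

@torch.enable_grad()
def computation(net: nn.Module, x: torch.Tensor):
    res = net(x).mean()
    return res

net = nn.Linear(4, 4)
x = torch.randn(8, 4)

arr = []
for _ in range(100):
    res = computation(net, x)
    arr.append(res.item())  # stores only a float; the graph attached to res can be freed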
Which of the following tricks can be used to create a skip-connection when the inputs and outputs have different dimensions (num_in, num_out, respectively)? Select all that apply.
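A hedged sketch of one commonly listed trick, projecting the input with an extra linear (or, for feature maps, 1×1-convolution) layer on the shortcut path so its shape matches the output; the class and attribute names are illustrative:

import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    def __init__(self, num_in: int, num_out: int):
        super().__init__()
        self.layer = nn.Linear(num_in, num_out)
        self.proj = nn.Linear(num_in, num_out, bias=False)  # maps the input to the output size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layer(x) + self.proj(x)  # shapes now agree, so the addition works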
Which of the following layers allows you to conveniently chain layers that consume a single input and produce a single output?
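For context, PyTorch's nn.Sequential is the container built for exactly this chaining pattern; each module's single output is fed as the next module's single input:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
out = model(torch.randn(1, 32))  # one call runs the whole chain in order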
What is the most basic class you can inherit from when implementing a neural network in PyTorch? Here, basic means introducing as little overhead as possible.
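A minimal sketch of the usual pattern, subclassing nn.Module and overriding forward (the class name is illustrative):

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x)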
The following function trains a neural network for one epoch. The body of the inner loop consists of 5 statements, listed below in scrambled order. Put them in the correct execution order (1 = first, 5 = last).
Ignore optimizer.zero_grad() and code indentation for this example.
def update(network: nn.Module, data: DataLoader, loss: nn.Module, optimizer: optim.Optimizer):
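The scrambled statements themselves are not reproduced here, but a hedged sketch of the canonical order inside the loop (variable names are assumptions) looks like this:

import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader

def update(network: nn.Module, data: DataLoader, loss: nn.Module, optimizer: optim.Optimizer):
    for x, y in data:        # iterate over mini-batches
        output = network(x)  # forward pass
        l = loss(output, y)  # compute the loss
        l.backward()         # backpropagate gradients
        optimizer.step()     # update the parameters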
When applying Transfer Learning to a pre-trained network in PyTorch, how do you "freeze" the feature extraction layers so their weights remain untouched during optimization?
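A sketch of the standard approach, setting requires_grad = False on the pre-trained parameters before attaching a fresh head; the torchvision backbone and the class count are assumptions for illustration:

import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # any pre-trained backbone
for param in model.parameters():
    param.requires_grad = False  # frozen: no gradients, no updates during optimization

model.fc = nn.Linear(model.fc.in_features, 10)  # the new head's parameters still train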
How do you properly ensure that a PyTorch model's Dropout and BatchNorm modules behave correctly during the evaluation/testing phase?
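For reference, calling model.eval() switches Dropout to the identity and makes BatchNorm use its running statistics; it is typically paired with torch.no_grad() during evaluation. A short sketch:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8), nn.Dropout(0.5))
x = torch.randn(4, 8)

model.eval()           # Dropout disabled, BatchNorm uses running statistics
with torch.no_grad():  # no autograd bookkeeping needed for evaluation
    out = model(x)
# call model.train() again before resuming training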
When iterating over a PyTorch DataLoader, what is its main function during network training?
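As a reminder of what the loader does during training, it wraps a Dataset and yields (optionally shuffled) mini-batches, which is what the training loop consumes. A minimal sketch with synthetic data:

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 4), torch.randint(0, 2, (100,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for x_batch, y_batch in loader:          # each iteration yields one shuffled mini-batch
    print(x_batch.shape, y_batch.shape)  # torch.Size([16, 4]) torch.Size([16]); last batch may be smaller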