Module 4 Assessment — ML & DL Foundations#
This assessment tests both your conceptual understanding (written tasks) and practical skills (coding tasks).
Assessment Structure#
3 Written Tasks (55 points): Explain concepts in your own words
3 Coding Tasks (45 points): Implement ML fundamentals in Python
Instructions#
Written tasks: Fill in the string variables with full sentences
Coding tasks: Complete the functions with the exact signatures shown
Do not rename variables or functions
Ensure the notebook runs top-to-bottom without errors
You may use the module content for reference
Task 1 — Concept Mapping (15 points) [Written]#
Prompt: Explain the relationship between AI, ML, DL, and LLMs.
Include:
The subset chain (AI → ML → DL)
Where LLMs sit (DL + Generative AI)
One sentence on why this matters in enterprise settings
Write 5–8 sentences.
concept_mapping = """
"""
Task 2 — Loss Calculation (15 points) [Coding]#
Implement a function that calculates the Mean Squared Error (MSE) loss.
MSE = (1/n) × Σ(predicted - actual)²
def calculate_mse(predictions, actuals):
    """
    Calculate Mean Squared Error between predictions and actual values.
    Args:
        predictions: list of predicted values
        actuals: list of actual values (same length as predictions)
    Returns:
        float: the mean squared error
    """
Example:
calculate_mse([10, 20, 30], [12, 18, 33])  # Returns approximately 5.67
# Because: ((10-12)² + (20-18)² + (30-33)²) / 3 = (4 + 4 + 9) / 3 = 17/3 ≈ 5.67
def calculate_mse(predictions, actuals):
    """
    Calculate Mean Squared Error between predictions and actual values.
    Args:
        predictions: list of predicted values
        actuals: list of actual values (same length as predictions)
    Returns:
        float: the mean squared error
    """
    # YOUR CODE HERE
    pass
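If you want to sanity-check your implementation before moving on, you can drop an optional snippet like the one below into a new cell. It is not graded, it simply assumes your calculate_mse is already defined above, and the tolerance is an arbitrary choice:
# Optional self-check (not graded); assumes calculate_mse is defined above
expected = 17 / 3  # from the worked example: (4 + 4 + 9) / 3 ≈ 5.67
result = calculate_mse([10, 20, 30], [12, 18, 33])
assert abs(result - expected) < 1e-9, f"Expected about {expected:.2f}, got {result}"
print("calculate_mse self-check passed")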
Task 3 — How Learning Works (20 points) [Written]#
Prompt: Explain how a neural network learns during training.
Include:
Loss function (error signal)
Gradient descent (minimising loss)
Backpropagation (error flowing backward)
Learning rate (step size; too large → instability)
Convergence (what it means and what it doesn’t guarantee)
You may find it helpful to think in terms of the “hiker in fog” analogy from the module.
Write 7–12 sentences.
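If it helps to ground your written answer, here is a purely optional, illustrative snippet (not graded, and using a made-up one-parameter loss rather than a real network) showing how a modest learning rate walks the loss downhill while an overly large one makes the updates overshoot and diverge:
# Optional illustration (not graded): gradient descent on the toy loss (w - 2)^2
# Its gradient is 2 * (w - 2) and its minimum is at w = 2.
def run_descent(learning_rate, steps=5, w=5.0):
    losses = []
    for _ in range(steps):
        gradient = 2 * (w - 2)
        w = w - learning_rate * gradient
        losses.append(round((w - 2) ** 2, 4))  # loss after each update
    return losses

print("learning rate 0.1:", run_descent(0.1))  # loss shrinks every step (converging)
print("learning rate 1.5:", run_descent(1.5))  # loss grows every step (overshooting / unstable)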
learning_mechanics = """
"""
Task 4 — Gradient Descent Step (15 points) [Coding]#
Implement a single gradient descent update step for a simple linear model.
For a model of the form prediction = weight × input, the gradient of the MSE loss with respect to the weight is:
gradient = (2/n) × Σ((prediction - actual) × input)
The weight update is: new_weight = old_weight - learning_rate × gradient
def gradient_descent_step(weight, inputs, actuals, learning_rate):
    """
    Perform one gradient descent step for a simple linear model (no bias).
    Args:
        weight: current weight value (float)
        inputs: list of input values
        actuals: list of actual target values
        learning_rate: step size (float)
    Returns:
        float: the updated weight after one gradient descent step
    """
Example:
gradient_descent_step(3.0, [1, 2], [2, 4], 0.1) # Returns 2.5
# The true relationship is: actual = 2 × input (so optimal weight = 2)
# Current predictions: [3×1, 3×2] = [3, 6]
# Errors (pred - actual): [3-2, 6-4] = [1, 2]
# Gradient = (2/2) × (1×1 + 2×2) = 1 × 5 = 5
# New weight = 3 - 0.1 × 5 = 2.5 (moved closer to optimal!)
def gradient_descent_step(weight, inputs, actuals, learning_rate):
    """
    Perform one gradient descent step for a simple linear model (no bias).
    Args:
        weight: current weight value (float)
        inputs: list of input values
        actuals: list of actual target values
        learning_rate: step size (float)
    Returns:
        float: the updated weight after one gradient descent step
    """
    # YOUR CODE HERE
    pass
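As with Task 2, an optional, unofficial self-check against the worked example can be added in its own cell once your function is defined (the tolerance below is just a suggestion):
# Optional self-check (not graded); assumes gradient_descent_step is defined above
new_weight = gradient_descent_step(3.0, [1, 2], [2, 4], 0.1)
assert abs(new_weight - 2.5) < 1e-9, f"Expected 2.5, got {new_weight}"
print("gradient_descent_step self-check passed")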
Task 5 — Simple Neuron (15 points) [Coding]#
Implement a single neuron with ReLU activation.
A neuron computes:
Weighted sum: z = Σ(input_i × weight_i) + bias
Activation: output = ReLU(z) = max(0, z)
def simple_neuron(inputs, weights, bias):
    """
    Compute the output of a single neuron with ReLU activation.
    Args:
        inputs: list of input values
        weights: list of weights (same length as inputs)
        bias: bias term (float)
    Returns:
        float: neuron output after ReLU activation
    """
Example:
simple_neuron([1, 2], [0.5, -0.5], 0.1)  # Returns 0.0
# Because: (1×0.5 + 2×(-0.5)) + 0.1 = 0.5 - 1 + 0.1 = -0.4 → ReLU(-0.4) = 0
simple_neuron([1, 2], [0.5, 0.5], 0.1) # Returns 1.6
# Because: (1×0.5 + 2×0.5) + 0.1 = 1.5 + 0.1 = 1.6 → ReLU(1.6) = 1.6
def simple_neuron(inputs, weights, bias):
    """
    Compute the output of a single neuron with ReLU activation.
    Args:
        inputs: list of input values
        weights: list of weights (same length as inputs)
        bias: bias term (float)
    Returns:
        float: neuron output after ReLU activation
    """
    # YOUR CODE HERE
    pass
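Again, an optional, unofficial self-check against the two worked examples (add it in a separate cell if you like) can catch sign or activation mistakes early:
# Optional self-check (not graded); assumes simple_neuron is defined above
assert abs(simple_neuron([1, 2], [0.5, -0.5], 0.1) - 0.0) < 1e-9  # ReLU clips the negative sum to 0
assert abs(simple_neuron([1, 2], [0.5, 0.5], 0.1) - 1.6) < 1e-9   # positive sum passes through unchanged
print("simple_neuron self-check passed")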
Task 6 — LLM Behaviour + Grounding (20 points) [Written]#
Prompt: Based on your understanding of ML/DL, explain why LLM hallucinations are expected and why grounding (RAG) helps.
Include:
Hallucination as pattern completion / over-generalisation
Connection to training data and next-token prediction
Why enterprise settings require evidence and auditability
Grounding/retrieval/RAG as mitigation (evidence + context)
Write 7–12 sentences.
llm_grounding_reflection = """
"""
Submission#
Before submitting:
Restart kernel and Run All Cells to ensure everything works
Verify all functions are defined and return correct types
Verify all written responses are complete and in your own words
Save the notebook
How to Download from Colab#
Go to File → Download → Download .ipynb
The file will download to your computer
Do not rename the file — keep it as
Module4_Assessment.ipynb
Submit#
Upload your completed notebook via the Module 4 Assessment Form.
Submission Checklist#
All written variables filled with thoughtful explanations in your own words
All coding functions implemented and working
Notebook runs top-to-bottom without errors
Downloaded as .ipynb (not edited in a text editor)
File not renamed