# Module 3 - LLM Fundamentals Assessment
Total Points: 20
## Assessment Instructions

- Complete all exercises in this notebook
- Do NOT rename any functions - the grader expects specific names
- Ensure your notebook runs without errors before submission
## LLM Gateway Setup

We recommend running Ollama locally: it's a valuable learning experience, and you'll understand LLMs much better than you would by only calling an API.
### Option A: Local Ollama (Recommended)

```python
LLM_BASE_URL = "http://localhost:11434"  # Or your pinggy tunnel URL
LLM_API_KEY = None  # → Uses /api/chat
```
### Option B: Server Gateway (Fallback)

```python
LLM_BASE_URL = "https://jbchat.jonbowden.com.ngrok.app"
LLM_API_KEY = "your-api-key"  # → Uses /chat/direct
```
```python
# ===== LLM GATEWAY CONFIGURATION =====
# Try Option A first! Only use Option B if you can't run Ollama locally.

# ------ OPTION A: Local Ollama (Recommended) ------
LLM_BASE_URL = "http://localhost:11434"  # Or your pinggy tunnel URL
LLM_API_KEY = None  # No API key → uses /api/chat

# ------ OPTION B: Server Gateway (Fallback) ------
# LLM_BASE_URL = "https://jbchat.jonbowden.com.ngrok.app"
# LLM_API_KEY = "<provided-by-instructor>"  # API key → uses /chat/direct

# ------ Model configuration ------
DEFAULT_MODEL = "phi3:mini"

import requests
import json
import re
```
## Exercise 1 - Basic LLM Caller (4 points)
Create a function `call_llm` that:

- Takes parameters: `prompt` (str), `temperature` (float, default 0.0)
- Auto-detects the endpoint based on `LLM_API_KEY`:
  - If the API key is set: use `/chat/direct`
  - If the API key is `None`: use `/api/chat`
- Sends a JSON payload with: `model`, `messages`, `temperature`, `stream` (False)
- Includes headers: `Content-Type`, `ngrok-skip-browser-warning`, `Bypass-Tunnel-Reminder`
- If the API key is set, includes an `X-API-Key` header
- Returns the response text (extracted from `message.content`)

**Requirements:**

- Use `requests.post()` to make the HTTP call
- Use `DEFAULT_MODEL` for the model name
- Format messages as: `[{"role": "user", "content": prompt}]`
- Return the text content, not the full JSON response
```python
# Exercise 1: Implement call_llm function
def call_llm(prompt: str, temperature: float = 0.0) -> str:
    """Call an LLM endpoint via /api/chat and return the response text."""
    # YOUR CODE HERE
    pass

# Test your function (optional)
# result = call_llm("Say hello in one word.")
# print(result)
```
## Exercise 2 - Extract Response Text (3 points)
Create a function `get_response_text` that:

- Takes `prompt` (str) as input
- Calls `call_llm` with `temperature=0.0`
- Returns the response text (your `call_llm` should already return text)
- Returns an empty string if an error occurs

**Requirements:**

- Must use your `call_llm` function
- Handle any exceptions gracefully
- Return type must be `str`
```python
# Exercise 2: Implement get_response_text function
def get_response_text(prompt: str) -> str:
    """Call LLM and return just the response text."""
    # YOUR CODE HERE
    pass

# Test your function (optional)
# text = get_response_text("What is 2+2? Answer with just the number.")
# print(text)
```
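The error-handling pattern here can be sketched standalone. The `call_llm` stub below only stands in for your Exercise 1 function so the snippet runs without a live model:

```python
def call_llm(prompt: str, temperature: float = 0.0) -> str:
    # Stand-in for your Exercise 1 function; always fails, as if the
    # gateway were unreachable.
    raise ConnectionError("no LLM gateway available")

def get_response_text(prompt: str) -> str:
    """Return the LLM's reply, or an empty string on any failure."""
    try:
        return call_llm(prompt, temperature=0.0)
    except Exception:
        # Swallow network/parse errors so callers always get a str back.
        return ""
```

With a working `call_llm`, this returns the model's text; with the stub, `get_response_text("hi")` yields `""`, which is exactly the fallback the exercise asks for.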
## Exercise 3 - JSON Output Parser (4 points)
Create a function `parse_json_response` that:

- Takes `prompt` (str) as input
- Calls `get_response_text` to get the LLM output
- Strips markdown code block wrappers (LLMs often return `` ```json ... ``` ``)
- Attempts to parse the response as JSON
- Returns a tuple: `(success: bool, result: dict or str)`
  - If parsing succeeds: `(True, parsed_dict)`
  - If parsing fails: `(False, error_message)`

**Requirements:**

- Handle markdown-wrapped JSON (strip `` ```json `` and `` ``` `` before parsing)
- Use `json.loads()` for parsing
- Use try/except to handle `json.JSONDecodeError`
- The error message should describe the failure

**Hint:** Use regex or string methods to detect and strip markdown code blocks before calling `json.loads()`.
```python
# Exercise 3: Implement parse_json_response function
def parse_json_response(prompt: str) -> tuple:
    """Call LLM, attempt to parse response as JSON, return (success, result)."""
    # YOUR CODE HERE
    pass

# Test your function (optional)
# success, result = parse_json_response('Return ONLY valid JSON: {"test": 123}')
# print(f"Success: {success}, Result: {result}")
```
## Exercise 4 - Temperature Comparison (4 points)
Create a function `compare_temperatures` that:

- Takes `prompt` (str) as input
- Calls `call_llm` twice with the SAME prompt:
  - Once with `temperature=0.0`
  - Once with `temperature=0.8`
- Returns a dictionary with keys:
  - `"low_temp"`: response text from temperature=0.0
  - `"high_temp"`: response text from temperature=0.8
  - `"are_identical"`: boolean indicating whether both responses are exactly the same

**Requirements:**

- Use your `call_llm` function (which returns text directly)
- Compare strings exactly for `are_identical`
```python
# Exercise 4: Implement compare_temperatures function
def compare_temperatures(prompt: str) -> dict:
    """Compare LLM responses at different temperatures."""
    # YOUR CODE HERE
    pass

# Test your function (optional)
# result = compare_temperatures("List 3 fruits.")
# print(result)
```
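The shape of this function is simple enough to sketch with a deterministic stand-in for `call_llm`, so the comparison logic can be seen (and run) without a model:

```python
def call_llm(prompt: str, temperature: float = 0.0) -> str:
    # Deterministic stand-in for your Exercise 1 function: it just echoes
    # the prompt with the temperature, so the two calls differ on purpose.
    return f"{prompt} (t={temperature})"

def compare_temperatures(prompt: str) -> dict:
    """Run the same prompt at two temperatures and report whether outputs match."""
    low = call_llm(prompt, temperature=0.0)
    high = call_llm(prompt, temperature=0.8)
    return {
        "low_temp": low,
        "high_temp": high,
        "are_identical": low == high,  # exact string comparison, per the spec
    }
```

Against a real model, note that `are_identical` may be True or False at high temperature; sampling is random, so repeated runs can differ.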
## Exercise 5 - Structured Prompt Builder (5 points)
Create a function `build_structured_prompt` that:

- Takes parameters:
  - `system_instruction` (str): behavior rules for the LLM
  - `task` (str): what the LLM should do
  - `constraints` (list of str): output constraints
- Returns a formatted prompt string with clearly labeled sections:

```text
SYSTEM: {system_instruction}

TASK: {task}

CONSTRAINTS:
- {constraint1}
- {constraint2}
```

**Requirements:**

- Each constraint should be on its own line, prefixed with `"- "`
- Sections must be separated by blank lines
- Use uppercase labels: SYSTEM, TASK, CONSTRAINTS
```python
# Exercise 5: Implement build_structured_prompt function
def build_structured_prompt(system_instruction: str, task: str, constraints: list) -> str:
    """Build a structured prompt with system, task, and constraints."""
    # YOUR CODE HERE
    pass

# Test your function (optional)
# prompt = build_structured_prompt(
#     "You are a helpful assistant.",
#     "Explain Python lists.",
#     ["Use simple language", "Maximum 2 sentences"]
# )
# print(prompt)
```
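This one is pure string assembly. A minimal sketch, assuming the labels sit on the same line as their content (the exact layout is an assumption; check it against the grader's expected format):

```python
def build_structured_prompt(system_instruction: str, task: str, constraints: list) -> str:
    """Assemble SYSTEM / TASK / CONSTRAINTS sections separated by blank lines."""
    # One "- " bullet per constraint, each on its own line.
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"SYSTEM: {system_instruction}\n\n"
        f"TASK: {task}\n\n"
        f"CONSTRAINTS:\n{constraint_lines}"
    )
```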
## Submission

Before submitting:

- Restart kernel and Run All Cells to ensure everything works
- Verify all functions are defined and return correct types
- Save the notebook

Submit your completed notebook via the Module 3 Assessment Form.