# Module 7 Quiz — RAG Pipelines

## Answer Key (Instructor Reference)

### MCQ Answers

| # | Topic | Answer |
|---|-------|--------|
| 1 | Why RAG | LLMs have no access to your data at runtime |
| 2 | LLM behavior | Confidently fabricates plausible-sounding answers |
| 3 | RAG definition | An architectural pattern |
| 4 | RAG capabilities | Provides evidence at runtime without changing the model |
| 5 | RAG vs fine-tuning | When you need answers grounded in specific, changing documents |
| 6 | RAG + fine-tuning | Fine-tuning teaches HOW; RAG provides WHAT |
| 7 | Core components | Retriever, Prompt Builder, Generator, Validator |
| 8 | Component separation | Each component can be tested and swapped independently |
| 9 | Retriever role | Find relevant chunks from the knowledge base |
| 10 | Prompt builder role | Structures retrieved chunks and question into a prompt |
| 11 | Data flow | Vector similarity search / retrieval |
| 12 | Normalized embeddings | To enable cosine similarity via inner product |
| 13 | Parameter k | The number of documents to retrieve |
| 14 | Trade-off k | More context vs. more noise |
| 15 | Evidence-first | Placing retrieved context before the question with explicit grounding instructions |
| 16 | RAG prompt instruction | If the context doesn’t contain the answer, say so |
| 17 | Chunk formatting | Format them clearly with labels (e.g., [1], [2]) |
| 18 | Near-miss | A chunk that is semantically similar but factually different |
| 19 | Near-miss danger | They cause high confidence in wrong answers |
| 20 | Grounded hallucination | A wrong answer based on incorrect retrieved context |
| 21 | No relevant chunks | Refuse to answer |
| 22 | Score threshold | Rejects retrieved chunks below a similarity threshold |
| 23 | Enterprise refusal | In regulated environments, refusal is risk management |
| 24 | Context overflow | Too many chunks causing the model to lose focus |
| 25 | Precision@k | Proportion of retrieved chunks that are relevant |
| 26 | Faithfulness | Whether the answer only uses information from the provided context |
| 27 | Caching | Document embeddings, query embeddings, and optionally retrieval results |
| 28 | Latency dominant | LLM generation |
| 29 | Audit trail | Query, retrieved chunk IDs, scores, prompt, response, and timestamps |
| 30 | Auditability importance | To support debugging, compliance, and trust |
| 31 | Platform RAG | Supports self-service corpora, configurable policies, and full observability |
| 32 | RAG limitations | Guarantee correctness |
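
For instructor reference, the minimal NumPy sketch below ties together answers 12–14, 21, and 22. Function and variable names are hypothetical, not the module's actual code: with L2-normalized embeddings, cosine similarity reduces to an inner product, the top k candidates are kept, and anything below the score threshold is dropped so the pipeline can refuse rather than guess.

```python
import numpy as np

def retrieve(query_emb: np.ndarray, doc_embs: np.ndarray, k: int = 4,
             score_threshold: float = 0.75) -> list[tuple[int, float]]:
    """Return up to k (index, score) pairs above the similarity threshold.

    Assumes query_emb and every row of doc_embs are L2-normalized, so the
    inner product equals cosine similarity (answers 12-14).
    """
    scores = doc_embs @ query_emb          # inner product == cosine similarity
    top = np.argsort(scores)[::-1][:k]     # best k candidates
    # Reject weak matches below the threshold (answer 22); an empty result
    # signals the pipeline to refuse rather than guess (answer 21).
    return [(int(i), float(scores[i])) for i in top if scores[i] >= score_threshold]
```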
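
Similarly, a sketch of the evidence-first prompt builder behind answers 15–17; the instruction wording here is illustrative, not the module's exact template.

```python
def build_prompt(question: str, chunks: list[str]) -> str:
    """Evidence-first prompt: labeled context before the question,
    with an explicit grounding instruction (answers 15-17)."""
    labeled = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(chunks))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{labeled}\n\n"
        f"Question: {question}\nAnswer:"
    )
```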

### Written Question Themes

| # | Topic | Key Themes Expected |
|---|-------|---------------------|
| 1 | Why RAG needed | Runtime access to private/current data, hallucination prevention |
| 2 | RAG vs fine-tuning | RAG for data, fine-tuning for behavior |
| 3 | Architectural pattern | Components, testing, swappability |
| 4 | Data flow | Query → embed → retrieve → prompt → generate → validate |
| 5 | Near-miss | High similarity, wrong facts, dangerous |
| 6 | Failure modes | Near-miss, missing chunks, conflicts, overflow |
| 7 | Refusal as feature | Risk management, regulated industries |
| 8 | Prompt builder | Evidence-first, grounding, structure |
| 9 | Guardrails | Thresholds, validation, filtering |
| 10 | k trade-off | Context vs noise, filtering |
| 11 | Retrieval evaluation | Precision@k, relevance, MRR |
| 12 | Faithfulness | Context-only, no invention |
| 13 | Audit trails | Components, compliance, debugging |
| 14 | Caching strategy | What to cache, invalidation |
| 15 | Latency breakdown | LLM dominates, caching implications |
| 16 | Risk shift | Retrieval errors vs model hallucination |
| 17 | RAG platform | Multi-tenant, governance, observability |
| 18 | Regulated RAG | Extra guardrails, compliance, auditability |
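
For grading the retrieval-evaluation and faithfulness themes (11–12), a small sketch of Precision@k as defined in MCQ answer 25 (the proportion of retrieved chunks that are relevant); identifiers are illustrative.

```python
def precision_at_k(retrieved_ids: list[str], relevant_ids: set[str], k: int) -> float:
    """Precision@k: fraction of the retrieved top-k chunks that are relevant."""
    top_k = retrieved_ids[:k]
    if not top_k:
        return 0.0
    return sum(1 for cid in top_k if cid in relevant_ids) / len(top_k)
```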