
Lesson 12 — AI Literacy: Prompting, Verification, Hallucinations, and Prompt Logs

(from “asking questions” to reliable workflows; a taste of agentic loops)

Why this matters (motivation)

By now you can:

  • explore a dataset (EDA)
  • clean and transform data
  • fit and interpret regression models
  • cluster observations
  • build forecasting baselines

Generative AI can help you do these tasks faster. But it can also:

  • fabricate citations and facts (hallucinations)
  • produce wrong-but-plausible code
  • state conclusions with unjustified confidence

This week makes sure you can use AI tools responsibly and effectively, in a way that faculty (and future employers) will accept.


Part A — What “AI literacy” means in this course

What AI is good at (in our workflow)

  • drafting code and boilerplate quickly
  • proposing plans, checklists, and alternative approaches
  • suggesting checks and failure modes you might have missed

What AI is bad at (common risks)

  • citing real, verifiable sources
  • knowing your actual data (columns, units, quirks)
  • recognizing when it is wrong; it sounds confident either way


Part B — Prompting patterns that produce better work

Pattern 1: Ask for assumptions first

Instead of:

“Analyze this dataset.”

Try:

“What assumptions do you need to make to analyze this dataset responsibly? List them and suggest checks.”

Pattern 2: Ask for a plan, not a final product

“Propose a step-by-step workflow (EDA → cleaning → model → diagnostics → communication) for this question.”

Pattern 3: Ask for checks and failure modes

“List the top 5 ways this analysis could be misleading and how to test each.”

Pattern 4: Ask for alternatives

“Give two alternative model specifications and explain when each is appropriate.”

Pattern 5: Force careful language

“Write conclusions using association language and include one limitation and one potential confounder.”


Part C — Verification: a practical checklist

1) Code verification (fast sanity checks)

  • run the code yourself; never trust an un-run snippet
  • check shapes, dtypes, and row counts before and after each step
  • compare the result to a simple baseline

2) Data verification

  • check how missingness changed after cleaning
  • recompute basic summary stats and compare them to the raw data

3) Claim verification

  • restate conclusions in association language unless the design supports causation
  • only cite sources you can actually locate and verify
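Putting the three layers together, a minimal sketch in pandas (the DataFrame and column names here are hypothetical, not from the course materials):

```python
import pandas as pd

# Hypothetical toy data standing in for your dataset
df = pd.DataFrame({"age": [34.0, 29.0, None, 41.0],
                   "income": [52000, 48000, 61000, 58000]})

# 1) Code verification: assert the output has the shape you expect
cleaned = df.dropna(subset=["age"])
assert cleaned.shape[0] == 3            # exactly one row had a missing age
assert cleaned["age"].isna().sum() == 0

# 2) Data verification: compare missingness and summaries before/after
print(df.isna().sum())                  # missing counts per column, before
print(cleaned["income"].describe())     # basic summary stats, after

# 3) Claim verification: recompute any headline number yourself
print("mean income after cleaning:", cleaned["income"].mean())
```

The point is not the specific checks but the habit: every AI-suggested step gets at least one assertion you wrote yourself.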


Part D — Hallucinations: what they look like in practice

Hallucination type 1: Fake citations

AI may produce:

  • author names, titles, and journals that look real but do not exist
  • real authors attached to papers they never wrote

Rule: only cite papers you can actually locate and verify.

Hallucination type 2: Wrong-but-plausible code

AI code may:

  • run without errors yet operate on the wrong column
  • leak information from test or future data into training
  • silently drop or duplicate rows during cleaning

Hallucination type 3: Overconfident interpretation

AI often writes strong language:

  • causal claims (“X causes Y”) where the data support only an association
  • sweeping conclusions with no limitations or confounders mentioned


Part E — Prompt & Workflow Logs (the course standard)

Minimum required fields

  1. Task context (what you were trying to do)

  2. Prompt(s)

  3. AI output snippet (short)

  4. What you changed (edits)

  5. Verification steps (sanity checks)

  6. Final decision (what you accepted or rejected)
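One lightweight way to capture the six required fields is a plain dictionary per log entry (a sketch; the keys mirror the fields above, while every example value is made up):

```python
# Each log entry records the six required fields; values here are hypothetical.
prompt_log_entry = {
    "task_context": "Draft a cleaning plan for the capstone survey data",
    "prompts": ["What assumptions do you need to make to analyze this "
                "dataset responsibly? List them and suggest checks."],
    "ai_output_snippet": "Assumes missingness is random; suggests checking "
                         "per-column NA rates before imputing...",
    "edits_made": "Replaced suggested mean imputation with row dropping",
    "verification_steps": "Checked shapes/dtypes and NA counts before/after",
    "final_decision": "Accepted with edits",
}

# Appending entries to a list gives you a reviewable log for submission
prompt_log = [prompt_log_entry]
print(len(prompt_log), "entry logged")
```

A Markdown cell in Colab with the same six headings works just as well; the format matters less than recording every field.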


Part F — A “taste” of agentic workflows (human-in-the-loop)

Many people use the term “agentic AI” to mean AI systems that:

  • plan their own multi-step workflows,
  • execute code or call external tools, and
  • iterate on results with little or no human approval between steps.

In this course, we use a safe version: Plan → Execute → Check → Iterate, where you execute the steps and approve changes.
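The safe loop can be sketched in a few lines; everything below (the toy task, the function names) is illustrative, not part of any real agent framework:

```python
# Plan -> Execute -> Check -> Iterate, with the human running each step.
# Toy task: drop records with a missing target value.

data = [{"target": 1}, {"target": None}, {"target": 3}]

def execute(records):
    # Execute: YOU run this step (e.g., in Colab), not the AI
    return [r for r in records if r["target"] is not None]

def check(before, after):
    # Check: a sanity check you chose *before* executing
    return len(after) < len(before) and all(
        r["target"] is not None for r in after)

for attempt in range(3):        # Iterate: bounded, never open-ended
    cleaned = execute(data)     # the plan for this step came from an AI suggestion
    if check(data, cleaned):    # approve the change only if the check passes
        break
```

The key design choice is that the loop is bounded and every change passes through a check the human defined in advance.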


Mini-lab (Google Colab)

In-class checkpoints (prompting)

  1. Choose one task from your capstone workflow:

    • EDA checklist for your dataset

    • cleaning plan

    • regression specification

    • clustering plan

    • forecasting baseline plan

  2. Write a “bad prompt” and a “good prompt” for the same task.

  3. Compare outputs and explain why the “good prompt” is better.

In-class checkpoints (verification)

  1. Take one AI-generated code snippet and run it.

  2. Perform at least 3 verification checks:

    • check shapes/dtypes

    • check missingness changes

    • check basic summary stats

    • compare to a baseline result

  3. Identify at least one failure or risk (even if minor) and fix it.
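The checks in step 2 above can each be a line or two (a sketch; `df_before` and `df_after` are hypothetical names for your data before and after the AI-generated step):

```python
import pandas as pd

# Hypothetical before/after frames around an AI-generated cleaning step
df_before = pd.DataFrame({"x": [1.0, None, 3.0, 4.0],
                          "y": ["a", "b", "b", None]})
df_after = df_before.dropna()

# check shapes/dtypes
print(df_before.shape, "->", df_after.shape)
print(df_after.dtypes)

# check missingness changes
print(df_before.isna().sum())    # nonzero before the step...
print(df_after.isna().sum())     # ...and should be zero after it

# check basic summary stats
print(df_after["x"].describe())

# compare to a baseline result (here: the naive pre-cleaning mean)
print("before:", df_before["x"].mean(), "after:", df_after["x"].mean())
```

A shift between the before and after means is exactly the kind of "failure or risk" step 3 asks you to identify and explain.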

In-class checkpoints (hallucination hunt)

  1. You will be given a short AI-generated paragraph (or code) that contains 2–3 issues.

  2. Find and label issues (fake citation? wrong claim? leakage? wrong column?).

  3. Rewrite the paragraph with correct language and at least one caveat.

In-class checkpoints (agent-like loop)

  1. Run one full loop: Plan → Execute → Check → Iterate (ask the AI for a plan, run one step yourself, verify the result, then revise your prompt and repeat once).

Submission (after class)


AI check (meta)


Review questions (quiz / reflection)

  1. Name three common AI failure modes in data analysis.

  2. What is one verification check you should always do after cleaning data?

  3. Why is it risky to ask AI for citations?

  4. What is the safe “agent-like loop” used in this course?