
Conversation

@EricThomson
Collaborator

Intro for week 2 of AI module: closes out #62


This is why hallucinations happen: the model fills gaps with confident guesses, misremembers facts, or invents details that fit the conversation. Even when the model is trying to be helpful, it may prioritize what it thinks you want to hear over what is actually correct. And because LLMs are trained on data from the past, with a fixed cutoff date, they rapidly fall out of sync with current information.

When we want an LLM to behave more like a truth-tracking assistant rather than a storyteller, we need to augment its knowledge. Over the last few years, three main approaches have emerged: prompt engineering, fine-tuning, and retrieval-augmented generation (RAG). Each approach has its place, and professional AI systems often use more than one. If you want an LLM to be more truthful, more up-to-date, and more grounded in real data, these are the tools you will reach for.
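Since RAG is the approach the module will spend the most time on, a toy sketch may help fix the idea before the full lesson. Everything in the snippet below is made up for illustration: the document list, the keyword-overlap retriever, and the prompt template stand in for a real embedding model, vector store, and LLM call.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) idea.
# Assumptions: a tiny in-memory document list and a keyword-overlap
# retriever stand in for a real embedding model and vector store;
# the final call to an LLM is left as a comment.

documents = [
    "The 2024 course schedule lists the AI module in weeks 2 and 3.",
    "Retrieval-augmented generation supplies the model with source text at query time.",
    "Fine-tuning updates a model's weights using additional training examples.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many query words they share (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the answer by pasting retrieved text into the prompt."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below. If the answer is not there, say so.\n"
        f"Context:\n{context_block}\n"
        f"Question: {query}"
    )

query = "When does the AI module run?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)
# In a real system this assembled prompt would now be sent to an LLM.
```

The point of the sketch is the shape of the pipeline: retrieve relevant text first, then place it in the prompt so the model answers from supplied sources rather than from its memorized (and possibly stale) training data.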
Collaborator

" If you want an LLM to be more truthful, more up-to-date, and more grounded in real data, these are the tools you will reach for."
I think this part could be rephrased because I do not understand it

Collaborator Author

" If you want an LLM to be more truthful, more up-to-date, and more grounded in real data, these are the tools you will reach for." I think this part could be rephrased because I do not understand it

Thanks, that is helpful -- I will expand on that!
