How Nectir AI Course Assistants Actually Work (and Why Your Data Stays Yours)

Kavitta Ghai
May 19, 2025

Generative AI tools are rushing into classrooms faster than any edtech wave we have seen, yet many educators remain understandably cautious, asking critical questions: What exactly happens after I upload my course files? How does my AI Assistant generate answers—and what control do I have over what it can and cannot say?

In this blog post, we’ll demystify exactly how Nectir AI Course Assistants work from start to finish, providing you with clear and actionable AI literacy so you can confidently set up AI to work for your classroom in the best way possible. You’ll understand precisely how your uploaded materials stay private, how answers are generated using different "Knowledge Settings," and how to ensure academic integrity is maintained at every step. By the end, you'll be fully equipped not only to reassure skeptical colleagues and students, but also to confidently customize and manage your AI Assistants—turning them into powerful, personalized educational tools that align precisely with your teaching goals.

We wrote this post for instructors who already use—or are about to use—Nectir AI and need a clear, shareable explanation of the plumbing beneath the chat window. We understand that transparency about the architecture and safety of your AI tools matters to you and your students. It governs what kinds of documents you feel safe sharing, how your AI Assistant retrieves an accurate answer, and why your data stays inside a private container rather than disappearing into the vast training corpus of a commercial LLM somewhere deep in the halls of OpenAI.

First, a Quick AI Literacy Refresher

To fully grasp Nectir AI’s approach to data safety and accuracy, it helps to understand some basics about Generative AI and Large Language Models (LLMs), such as GPT-4o and Claude 3.5.

  • Large Language Models (LLMs) are powerful AI systems trained on vast amounts of internet data. They're excellent at understanding language and context and at generating detailed, human-like responses.
    • The LLMs Nectir AI currently uses are GPT-4o/4.1 and Claude 3.5. We are always adding new models once we’ve tested them for accuracy in an educational setting.
  • Training vs. Querying: When you hear about formally "training" a foundational AI model, like GPT-4o, it usually refers to the initial process of feeding it huge datasets to teach it general knowledge. In contrast, "querying" refers to the AI simply using existing knowledge to respond to questions—it’s not learning from new data or storing that data permanently. That's exactly what we're doing inside Nectir AI—querying your specific AI Assistant's knowledge base created from your uploaded course materials, as sketched below.
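
To make that distinction concrete, here is a minimal sketch of what a "query" looks like in code. It is illustrative only (the client, model name, and prompt text are stand-ins rather than Nectir's actual implementation), but it shows the key point: the model reads a prompt and writes a reply, and nothing in the exchange updates its weights or gets folded into its training data.

```python
# Illustrative only -- not Nectir's actual code. A "query" sends a prompt to a
# model endpoint and reads back a reply; the model's weights are never updated,
# and nothing from the request is added to its training data.
from openai import OpenAI

client = OpenAI()  # stand-in client; a private endpoint would be configured here

def query_model(question: str, course_context: str) -> str:
    """Ask a question with course context attached. Nothing here is 'learned':
    the same call tomorrow starts from the same fixed model."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, for illustration
        messages=[
            {"role": "system", "content": "You are a course assistant. Use the provided context."},
            {"role": "user", "content": f"Context:\n{course_context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```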

Training Your Nectir AI Assistant ≠ Model-Training

When a student asks your AI Assistant a question, Nectir AI retrieves only the snippets of your uploaded materials that are relevant to that topic, passes them through a private GPT-4o or Claude 3.5 endpoint, and returns an answer guided by the Assistant's prompt and settings. Nothing you upload to the knowledge base of your AI Assistant ever leaves the walled garden of Nectir AI, and nothing you upload is ever used to formally train those large language models.
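
Here is a simplified sketch of that retrieval step. The in-memory list stands in for the private vector database and the helper names are hypothetical, but the idea is the same: turn the student's question into an embedding, rank the stored chunks by similarity, and hand only the closest matches to the model.

```python
# Simplified retrieval sketch -- hypothetical names and an in-memory stand-in
# for the private vector database, not Nectir's actual architecture.
import math
from openai import OpenAI

client = OpenAI()

# Each record pairs a chunk of course text with its embedding vector.
course_chunks: list[dict] = [
    # {"text": "Late work loses 10% per day...", "embedding": [0.012, -0.094, ...]},
]

def embed(text: str) -> list[float]:
    """Turn text into a vector embedding (a 'mathematical fingerprint')."""
    result = client.embeddings.create(model="text-embedding-3-small", input=text)
    return result.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity between two vectors: values near 1.0 mean very similar meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k stored chunks most similar to the student's question."""
    q_vec = embed(question)
    ranked = sorted(course_chunks, key=lambda c: cosine(q_vec, c["embedding"]), reverse=True)
    return [c["text"] for c in ranked[:k]]

# The returned chunks are then placed into the prompt (as in the earlier sketch)
# so the model answers from your materials rather than from memory alone.
```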

So when you “train” your Nectir AI Assistant on your course documents, you are not actually re-training GPT-4o or Claude in the literal sense. Instead, Nectir does the following:

  1. Ingests your files—syllabus, textbook PDFs, rubrics, slide decks, lecture recordings, etc.
  2. Splits them into bite-sized chunks—each file is sliced into paragraph-sized passages and converted into mathematical fingerprints called vector embeddings (sketched in code after this list).
  3. Stores them in a private vector database—those fingerprints now live in an encrypted database inside Nectir’s Azure tenant, never touching OpenAI or Anthropic servers.
  4. Tags them only to your Assistant—only your specific AI Assistant can retrieve those chunks (aka your uploaded files), unless you explicitly share your Assistant with another course within Nectir AI.
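
Conceptually, that ingestion pipeline looks something like the sketch below. The chunk size, embedding model, and storage structure here are assumptions for illustration; Nectir's production pipeline lives inside its own encrypted Azure tenant, but the shape is the same: split, embed, and store each piece tagged to a single Assistant.

```python
# Conceptual ingestion sketch -- illustrative chunk size, embedding model, and
# storage, not Nectir's production code.
from openai import OpenAI

client = OpenAI()
vector_store: list[dict] = []  # stand-in for an encrypted vector database

def split_into_chunks(document_text: str, max_chars: int = 800) -> list[str]:
    """Slice a document into roughly paragraph-sized pieces."""
    paragraphs = [p.strip() for p in document_text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for paragraph in paragraphs:
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current)
            current = paragraph
        else:
            current = f"{current}\n\n{paragraph}".strip()
    if current:
        chunks.append(current)
    return chunks

def ingest(document_text: str, assistant_id: str) -> None:
    """Embed each chunk and store it tagged to a single Assistant."""
    chunks = split_into_chunks(document_text)
    embeddings = client.embeddings.create(model="text-embedding-3-small", input=chunks)
    for chunk, item in zip(chunks, embeddings.data):
        vector_store.append({
            "assistant_id": assistant_id,  # only this Assistant can retrieve the chunk
            "text": chunk,
            "embedding": item.embedding,
        })
```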

Your content never reaches OpenAI or Anthropic servers and is never mixed into their base models. That’s guaranteed by our secure, sandboxed endpoints—private API gateways Microsoft Azure provisions so that payloads stay isolated from consumer traffic.
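
If you are curious what a "private endpoint" looks like in practice, the snippet below shows a generic Azure OpenAI client configuration. The endpoint URL, deployment name, and API version are placeholders rather than Nectir's actual infrastructure; the point is simply that requests go to a tenant-specific gateway instead of the public consumer API.

```python
# Generic Azure OpenAI client configuration -- placeholder endpoint, deployment
# name, and API version; shown to illustrate the idea, not Nectir's setup.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-tenant.openai.azure.com",  # tenant-specific gateway
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

# Requests go to a deployment inside the tenant, isolated from consumer traffic.
response = client.chat.completions.create(
    model="gpt-4o-deployment",  # name of a private deployment, not the public model id
    messages=[{"role": "user", "content": "Is my syllabus data private?"}],
)
print(response.choices[0].message.content)
```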

Bottom line: your content becomes a searchable mini-library that stays inside Nectir’s walls, away from public LLM training pipelines.

So what exactly happens when a student asks a question to a Nectir AI Assistant?

Understanding Your Nectir AI Assistant’s Knowledge Settings

Nectir AI’s Knowledge Settings are the dial that lets you decide how much the large language model thinks for itself versus how strictly it must anchor every answer in the materials you uploaded. That dial matters because it governs both pedagogical fidelity and applicability to real-world scenarios. An AI Course Assistant for a class about American history may welcome the LLM’s broader context for the Civil War’s present-day implications; an AI Test Prep Assistant for a summative assessment on your unpublished lab manual definitely should not. For every Nectir AI Assistant you create, you will choose among three modes—General Knowledge, Topic Knowledge, and Document Only. You can change this setting anytime in the Knowledge tab of your Assistant’s settings. This section will help you decide when to select each option. You can also review these instructions in detail in our Support Guide here.

You have complete control over the extent to which the LLM uses its own extensive knowledge versus exclusively your uploaded documents. Nectir AI offers three clear modes for this, each described below and sketched in code after the list.

How do you know when to use each Knowledge setting?

  • General Knowledge switches on the full breadth of GPT-4o or Claude alongside your documents, effectively giving students what feels like a well-briefed teaching assistant that can weave external facts or analogies into its explanations. Because the model is still fenced in by your prompt and retrieval engine, it will cite your files first, but it may round out an answer with details drawn from its pre-training.
    • This is ideal for open-ended discussion that allows students to be infinitely curious, exploratory Q&A that goes past course topics, or interdisciplinary classes where connecting dots beyond the textbook enriches the learning experience.
  • Topic Knowledge is the middle gear (and the one we most often recommend). The Assistant can lean on model knowledge only when that knowledge overlaps with the themes present in your documents, the current prompt, or the student’s question. Think of it as letting the model “fill in the cracks” without wandering too far off-topic.
    • Use it when you want richer conversations yet still prefer answers to stay tightly coupled to course scope—e.g., when clarifying foundational chemistry concepts that your slides mention but do not define in depth, or when students want to connect the concepts they’re learning to how they will likely use them in a real job setting.
  • Document Only locks the Assistant inside the four walls of your uploaded files. The retrieval layer feeds the LLM nothing but those passages; if the answer is not in the documents, the Assistant simply says it cannot answer the question.
    • This option relies only on the data you upload. Select this mode for any context where originality and academic integrity are paramount—a graduate dissertation, lengthy case studies in MBA courses, campus-specific financial aid documentation, etc.
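
To make the three modes concrete, here is one way a Knowledge Setting could translate into the instructions an Assistant receives. The names and prompt text below are hypothetical, not Nectir's implementation; the takeaway is that the setting controls how the retrieved passages are framed, which is why the same question can get very different answers under each mode.

```python
# Hypothetical sketch of how a Knowledge Setting might shape the system prompt.
from enum import Enum

class KnowledgeSetting(Enum):
    GENERAL_KNOWLEDGE = "general"
    TOPIC_KNOWLEDGE = "topic"
    DOCUMENT_ONLY = "document_only"

def build_system_prompt(setting: KnowledgeSetting, retrieved_passages: list[str]) -> str:
    """Frame the retrieved course passages according to the chosen mode."""
    context = "\n\n".join(retrieved_passages)
    if setting is KnowledgeSetting.DOCUMENT_ONLY:
        rules = ("Answer ONLY from the course materials below. If the answer is not "
                 "there, say that you cannot answer from the provided documents.")
    elif setting is KnowledgeSetting.TOPIC_KNOWLEDGE:
        rules = ("Prefer the course materials below. Add outside knowledge only when "
                 "it directly relates to the topics those materials cover.")
    else:  # GENERAL_KNOWLEDGE
        rules = ("Ground answers in the course materials below first, but you may "
                 "draw on broader knowledge to enrich explanations.")
    return f"{rules}\n\nCourse materials:\n{context}"
```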

You can mix and match these settings across your various Nectir AI Assistants even within the same course—Document Only for the Case Study Roleplay Partner, Topic Knowledge for the Midterm Study Buddy, General Knowledge for the Career Coach. Mastering this single setting empowers you to calibrate every Assistant to your exact learning goals while ensuring that your students are getting the most accurate and relevant information every time they engage with AI.

That’s the nuts-and-bolts tour of how your Assistant thinks and why those Knowledge settings matter. Armed with this, you can upload with confidence, pick the right mode for the moment, and explain to students exactly what’s happening behind the screen. We’ll keep rolling out bite-sized AI-literacy posts like this—so you, your colleagues, and your learners stay fully informed as the tech keeps evolving.

Meanwhile, jump into our Discourse community to swap tips, prompts, and best-practice stories with instructors nationwide who are also building on Nectir AI. See you there, and happy prompting!
