As the first GenAI co-pilot for employee learning and training, we get a lot of questions about how GenAI works. So we thought we'd write a blog post about a critical component of any good GenAI application: the prompts.
An AI prompt is any form of text, question, information, or code that communicates to the AI what response you're looking for. Adjust how you phrase your prompt, and the AI can produce very different responses.
Prompts are natural language instructions given to large language models (LLMs) before the actual query or task. They guide the LLM to produce better or more specific responses. For example, a prompt could be “Write a summary of the following article in three sentences,” followed by the article text, or “Explain in detail the steps to add a guest to the waitlist.” Great examples in learning and engagement come from Josh Cavalier’s 150+ Prompts for Education.
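To make this concrete, here is a minimal sketch of what sending such a prompt looks like in code, using OpenAI's Python SDK. The model name and the placeholder article text are our own illustrative assumptions, not part of any specific product:

```python
# A minimal sketch: a prompt (the instruction) placed before the actual content.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article_text = "..."  # the article you want summarized

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user",
         "content": "Write a summary of the following article in three sentences.\n\n"
                    + article_text},
    ],
)
print(response.choices[0].message.content)
```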
LLM prompts are part of a broader technique called prompt engineering, which is the process of designing and selecting effective prompts for LLMs. Prompt engineering can improve the performance of LLMs on various natural language processing (NLP) tasks, such as question answering, text generation, and classification. It can also help steer LLMs toward truthful, informative answers and away from nonsensical results or results unfaithful to the underlying content, better known as hallucinations.
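One common prompt-engineering pattern for steering the model toward faithful answers is to constrain it to a known content set via a system prompt. Here's a rough sketch; the wording of the instruction and the sample question are purely illustrative:

```python
# A sketch of a grounding prompt: the system message restricts the model
# to a trusted content set, which helps reduce hallucinated answers.
from openai import OpenAI

client = OpenAI()

source_content = "..."  # the trusted content the model may draw from

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Answer ONLY using the content below. If the answer is not "
                    "in the content, say you don't know.\n\n" + source_content},
        {"role": "user", "content": "How do I add a guest to the waitlist?"},
    ],
)
print(response.choices[0].message.content)
```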
For those who use an LLM such as Google Bard, OpenAI ChatGPT, or others, how a prompt is written can greatly influence the sophistication of the answer, its length, and even the language it is written in. Additionally, systems that maintain the user's state, like ChatGPT, can carry the context of previous questions and answers across roughly 3,000 words (about 4k tokens; a token is a chunk of text roughly four characters long, including spaces and punctuation). This means that over several queries the generative AI system will know the context of your questions: answers to your previous questions are re-sent as part of the prompt behind the scenes. You can ask a question, receive an answer, and then ask, “Can you give an example?” If you haven't exceeded the 4k token limit, it will remember what you're referring to from the previous question. LLM prompts are also related to other techniques such as fine-tuning, and to minimizing hallucinations by explicitly instructing the LLM to answer only from a given content set. Minimizing LLM hallucinations, however, is a huge topic that we will dive into in future posts.
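Behind the scenes, that "memory" is simply the prior turns being resent with each request. A sketch of how an application might maintain that state (the function and variable names are our own, and a real application would trim or summarize old turns once the history approaches the token limit):

```python
# A sketch of multi-turn context: the full conversation history is
# resent with every request, which is how the model "remembers".
from openai import OpenAI

client = OpenAI()
history = []  # accumulated conversation, resent on each call

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,  # prior Q&A pairs give the model its context
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

ask("What is prompt engineering?")
# This follow-up works because the previous turn is still in `history`:
print(ask("Can you give an example?"))
```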
A great recent example of an extremely effective prompt comes from a Canadian student who goes by JushBJJ on GitHub, who created a ChatGPT prompt that turns it into a fully fledged tutor. Though written more like a JSON program than plain prose, it delivers personalized learning experiences to anyone wishing to study any subject, from Grade 1 to Ph.D.-level complexity. Mr. Ranedeer: Your Personalized AI Tutor lets users adjust the depth of knowledge, learning style, communication type, tone, and reasoning framework of the AI tutor. The prompt also provides optional tools that can create flexible environments, personalities, and more for learning. What's astounding is that it's designed to work on OpenAI's ChatGPT-4 or on the free ChatGPT 3.5 without any additional work (though JushBJJ notes it may not perform as well on ChatGPT 3.5). To try it out, invoke it directly from a link to ChatGPT-4, or on ChatGPT 3.5 by copying and pasting from the link below.
See Mr. Ranedeer’s personalized AI tutor pre-prompt written out.
OpenAI has recently announced an expansion of its context limit to 16k tokens (roughly 12,000 words), which offers space for more robust pre-prompts, and has also expanded its support for function calling within pre-prompts. This means OpenAI can take a written statement and convert it into structured output such as JSON to invoke API calls to databases or other applications, to do things like “Update my vacation to begin next week.” Google Bard has announced similar function calling, but with select partners. Going forward, we'll see organizations using these and other LLMs as an action platform, performing multi-step logic to create a more natural interface between all their existing systems and their people.
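Here's a sketch of what function calling looks like in practice. The update_vacation function and its schema are hypothetical, purely for illustration; the point is that the model returns structured JSON arguments instead of free text:

```python
# A sketch of function calling: the model maps a natural-language request
# onto a JSON payload your code can pass to a real backend system.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "update_vacation",  # hypothetical HR-system endpoint
        "description": "Change the start date of the user's vacation.",
        "parameters": {
            "type": "object",
            "properties": {
                "start_date": {"type": "string",
                               "description": "New start date, ISO format"},
            },
            "required": ["start_date"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Update my vacation to begin next week"}],
    tools=tools,
)
# The model responds with structured arguments, e.g. {"start_date": "2023-07-03"},
# which your application then uses to call the real API.
print(response.choices[0].message.tool_calls[0].function.arguments)
```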
Feathercap is your team answer platform that combines generative AI, cognitive search, context tracking, and authoring to automatically generate and tailor the right conversations for everyone as an instantly deployable subscription. The result? Maximized engagement, learning and expertise for your team.
Current learning, customer support, and search approaches are costly and ineffective because they rely on manually created content, hardcoded workflows, and static search results that are not conversational. Our customers tell us Feathercap saves them time, money, and energy in managing their flow of knowledge.
For more: