Holy AI, Batman! Large language models like Claude, ChatGPT, and Perplexity are becoming increasingly popular in business settings. These powerful AI systems can generate human-like text and engage in natural conversations. However, getting the most out of them requires understanding how to properly frame prompts, utilize different capabilities, and govern usage responsibly. Fun stuff, right?
In this comprehensive guide, we’ll cover key strategies and best practices for successfully leveraging large language models in your business.
Prompting Strategies for Better Outputs
Prompt engineering is a thing. For how long, no one knows. But right now, how you phrase prompts and questions to large language models greatly impacts the quality of the responses. Here are some effective prompting techniques:
- Ask clear, concise questions. The AI will generate better content when given a specific question to answer, rather than a vague request. Frame the prompt as a complete sentence or paragraph with sufficient context. Talk to it like you would to a person.
- Set the desired tone and style. Specify if you want casual language or a formal business tone. Ask for conversational responses or bullet points. This helps tailor outputs for business uses. Claude is pretty good at this out of the box, but ChatGPT may need some coaching, ideally in your Custom Instructions.
- Request summaries or explanations. If you provide lengthy input information, ask the AI to summarize key points or explain complex details more simply.
- Seek creative perspectives. Prompt for thought-provoking ideas, suggestions, or fresh angles by framing the question more openly.
- Verify factual accuracy. Fact-check any statistical claims or factual statements made by the AI before presenting them externally as true. The models can make mistakes, and sometimes they hallucinate. Most of them are not connected directly to the internet, so their knowledge can be out of date, sometimes by six to twelve months or more.
Examples of Effective Prompting
- “Can you summarize the key points from this five-page market research report in three concise bullet points?”
- “Please write a short LinkedIn post in an enthusiastic tone announcing our new product launch.”
- “What are some creative ways we could improve our customer referral program?”
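The prompting techniques above boil down to assembling context, task, tone, and format into one specific request. Here's a minimal Python sketch of that idea; the function name and parameters are illustrative, not from any particular LLM library:

```python
def build_prompt(task, tone=None, output_format=None, context=None):
    """Assemble a clear, specific prompt from its parts."""
    parts = []
    if context:
        parts.append(f"Context: {context}")
    parts.append(task)
    if tone:
        parts.append(f"Use a {tone} tone.")
    if output_format:
        parts.append(f"Format the response as {output_format}.")
    return " ".join(parts)

prompt = build_prompt(
    task="Summarize the key points from the attached market research report.",
    tone="formal business",
    output_format="three concise bullet points",
)
print(prompt)
```

Templating prompts this way also makes them easy to reuse and refine across a team, rather than everyone improvising from scratch.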
Modes and Capabilities for Tailored Responses
Large language models have different modes and capabilities you can toggle to shape responses for business needs:
- Switch between modes. Use commands to make the AI act as a subject matter expert, creative brainstorming partner, data analyst, or virtual assistant.
- Leverage strengths. Use the model for content writing, data processing, answering customer questions, explaining concepts or summarizing documents.
- Mitigate limitations. Avoid directly automating regulated tasks like legal work. Fact check outputs. Provide oversight.
- Employ tools like context and memory. Maintain conversational context, create searchable databases, or build persistent memory to improve capabilities over time.
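In practice, a "mode" is usually nothing more than a system prompt you send along with each request. A hedged sketch, with mode names and instructions invented for illustration:

```python
# Hypothetical sketch: "modes" are just system prompts paired with each request.
MODES = {
    "creative_brainstorming": (
        "You are a creative brainstorming partner. Keep output short "
        "and ask clarifying questions instead of volunteering ideas unprompted."
    ),
    "customer_service": (
        "You are a customer service agent. Answer support questions "
        "with empathy and accurate product knowledge."
    ),
    "data_analyst": "You are a data analyst. Interpret figures and summarize trends.",
}

def make_request(mode, user_message):
    """Pair the chosen mode's system prompt with the user's message."""
    return {"system": MODES[mode], "user": user_message}
```

Keeping mode instructions in one place like this makes it easy to tighten them later, which matters, as the unprompted-output example below shows.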
Example Modes and Uses
- Creative Brainstorming Mode to generate product naming ideas
- Marketing Assistant Mode to craft social media posts
- Customer Service Mode to answer support questions
- Data Analyst Mode to interpret sales figures
- Subject Matter Expert Mode to explain industry concepts
You need to give them detail. For example, on Claude, “go into creative brainstorming mode but do not give output unprompted. keep your output short. analyze previous output and ask me questions if you need clarification” is better than “go into creative brainstorming mode” unless you want something like the following unprompted output:
Let’s engage in some imaginative thought! Here are a few ideas to get those creative juices flowing:
- Invent a new holiday that celebrates something unusual. What would it be called and how would people celebrate it? Get silly and absurd with your ideas!
- Imagine you wake up one day with a superpower. What would it be and how would you use it? Come up with a superhero backstory.
- Combine two random animals to make a new mythical creature. Describe its appearance, abilities, and habitat. Give your creature a name.
- You’re a mad scientist who accidentally brings a household object to life. What object is it and what’s its personality like? How does it wreak havoc?
- Describe a dystopian future society centered around something unexpected like pineapple pizza or socks. What are the laws, customs, fashion, etc?
- Pick a genre of art like painting, music or dance and come up with a new style that mixes it with something totally different. What’s your new artistic movement called?
- Imagine you find a magic portal that leads to a new world. Describe the creatures, environment, and civilizations that exist there. Come up with potential adventures.
- Design a treehouse for a fairy or gnome. What features and rooms would it have? What furniture? Draw it or describe the details.
Let your imagination run wild! Don’t filter yourself, even if your ideas seem silly at first. You never know what creative concepts might emerge. Feel free to build off any of these ideas too. The key is tapping into that uninhibited creative part of your mind.
As you can see, LLMs can get away from you without specific guidance, not just on content but also on output format and length.
Obtaining High-Quality Outputs
Getting the best content from large language models involves iteratively shaping responses:
- Provide positive or negative feedback. Give honest praise or critiques to guide output quality. But keep in mind with prompting, LLMs do better when you tell them what you want, rather than what you don’t want. The latter is like telling someone, “Don’t think about the Statue of Liberty!” Now the LLM is thinking about the statue and will talk about it.
- Refine prompts for better answers. If a response is off-base, rephrase the prompt with more context. The more it knows, the better it does.
- Specify desired formats. Ask for emails, reports, bullet lists, tweets, code – whatever output type you need.
- Guide the AI to improve over time. Treat it like a virtual employee you’re training through ongoing positive and negative feedback. Use and refine Custom Instructions on ChatGPT, and start a new chat every time you refine them.
Example Iterative Refinement Process
Here’s how it might go as you work on improving the prompt and the output.
- Prompt: Write a blog post about our new accounting software.
- AI generates off-topic post about accounting principles.
- User provides negative feedback explaining it should focus on the software features and benefits.
- User rephrases prompt adding context about the software and desired angles.
- AI generates improved post aligned to prompt.
- User provides positive feedback on clear software explanations.
- AI retains that feedback, at least within the chat’s context window, and applies it to subsequent drafts.
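The refinement process above can be sketched as a simple loop: generate, critique, fold the feedback back into the prompt, and regenerate. This is an illustrative sketch; `generate` stands in for whatever LLM call you use, and `critique_fn` for your human review step:

```python
# Hypothetical sketch of the iterative refinement loop.
def refine(generate, prompt, critique_fn, max_rounds=3):
    """Regenerate until the critique function accepts the draft."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique_fn(draft)
        if feedback is None:  # draft accepted, no further feedback
            return draft
        # Fold the previous draft and the critique back into the prompt.
        prompt = (
            f"{prompt}\n\nPrevious draft:\n{draft}\n\n"
            f"Feedback: {feedback}\nPlease revise."
        )
        draft = generate(prompt)
    return draft
```

Note that the loop tells the model what to do ("Please revise" with specific feedback) rather than only what it got wrong, matching the Statue of Liberty advice above.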
Governing Responsible AI Usage
To maintain trust and accountability when leveraging large language models in business, focus on ethical governance:
- Check for biases. Review outputs for potentially biased language that could cause harm. I once got a particularly weird set of images from ChatGPT using DALL-E 3: I left the prompt vague, asking for some impractical toy ideas for fun, but for some reason it included outdated, racist portrayals of people. LLMs don’t exactly know what they’re doing, so they need our guidance.
- Don’t automate regulated tasks. Don’t directly rely on AI for legal, medical, or engineering work that requires human credentialing. AI can help in these cases, but it needs a watchful eye.
- Audit AI content before publishing. Review any externally facing text, social media posts, or chatbot responses generated by the AI against relevant laws and brand guidelines.
- Maintain human oversight. Keep humans in the loop directing the AI’s work and validating its outputs, rather than fully automating processes.
- Document AI usage. Keep track of where, when and how AI is deployed in the organization for accountability.
Governance Best Practices
- Establish an approval workflow for publishing AI-generated content
- Develop an AI code of ethics for your business
- Add visibility markers indicating text created by AI
- Perform algorithmic audits to detect biases or errors
- Implement confidentiality safeguards if the AI handles personal data
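The "document AI usage" practice above can be as simple as an audit log entry per use. Here's a minimal sketch, assuming a plain in-memory list; the field names are illustrative, and a real deployment would persist these records:

```python
import datetime

def log_ai_usage(log, team, tool, purpose, reviewed_by=None):
    """Append one audit record for a single use of an AI tool."""
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "team": team,
        "tool": tool,
        "purpose": purpose,
        "reviewed_by": reviewed_by,  # stays None until a human signs off
    })
```

The `reviewed_by` field doubles as a lightweight approval workflow: content is publishable only once a named reviewer is recorded.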
Key Takeaways for Leveraging Large Language Models
Here are some recap highlights on effectively using this powerful technology:
- Strategically frame prompts to elicit useful responses tailored to business needs.
- Take advantage of different modes and capabilities like summarization, data analysis, content creation, etc.
- Iteratively refine outputs through feedback and clear prompting.
- Audit for accuracy and ethical usage, maintaining human oversight.
The Future of AI in Business is Here
Large language models present game-changing opportunities for businesses to automate rote tasks, generate original content, analyze data, improve customer service, and more. Following the strategies outlined in this guide will enable you to productively incorporate these AIs into your workflows. With responsible governance, they can take productivity, efficiency, and innovation to new heights. The future is here; leverage it with these best practices for prompting, guiding, and overseeing large language models in business.
One of the most important features needed to have more natural conversations with large language models is conversational context. Here’s an overview of what conversational context is, why it matters, and how to make the most of it:
What is Conversational Context?
Conversational context refers to an AI’s ability to remember previous parts of a conversation and use that information to inform its responses. This prevents the AI from treating each new prompt as a standalone question without any relation to what came before it.
With conversational context, the AI can keep track of “who’s who”, “what’s what”, the topic thread, concepts mentioned, and more. This allows dialogues to feel more coherent, relevant and human.
Why Conversational Context Matters
Lack of conversational context leads to fragmented, repetitive conversations. Without it, you have to re-explain information and re-state context frequently.
Conversational context enables:
- Back-and-forth dialogues instead of isolated Q&A
- Building on previous points without repetition
- Referencing things mentioned earlier in the conversation
- More logical, coherent, natural conversations
- Discussing complex issues over multiple questions
Best Practices for Leveraging Conversational Context
- Maintain a consistent conversation. Don’t abruptly jump between different topics.
- Summarize periodically. Occasionally have the AI summarize the conversation so far.
- Clarify misunderstandings. If the AI seems confused, you can re-state or re-phrase parts of the context.
- Ask follow-up questions. Dig deeper on points made earlier in the conversation.
- Be consistent in how you reference things. Use the same names when referring to people, products, companies etc.
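Under the hood, conversational context is typically just the running message history that gets resent with every request, trimmed when it outgrows the model's context window. A minimal sketch, with illustrative class and method names:

```python
# Minimal sketch: context = the message history resent with each request.
class Conversation:
    def __init__(self, max_messages=20):
        self.messages = []
        self.max_messages = max_messages  # crude stand-in for a context window

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})
        # Drop the oldest turns once the window is exceeded.
        self.messages = self.messages[-self.max_messages:]

    def as_context(self):
        """Render the history as text to prepend to the next prompt."""
        return "\n".join(f"{m['role']}: {m['content']}" for m in self.messages)
```

The trimming step is why the "summarize periodically" practice above matters: a summary preserves early details that would otherwise fall out of the window.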
Key Takeaways on Conversational Context
- It enables smoother, more logical conversations.
- Prevent abrupt topic changes and be consistent with references.
- Summarize and clarify when needed.
- Ask follow-up questions to build on context.
Conversational context is a game-changing capability for large language models. Taking steps to maximize it will lead to much more useful AI dialogues.
Memory and Knowledge Retention
In addition to conversational context, large language models also benefit greatly from expanded memory and knowledge retention capabilities:
Why Memory Matters
Having a strong memory enables the AI to:
- Look up information provided in previous conversations instead of forgetting it.
- Learn over time as its knowledge base grows through interactions.
- Become more helpful by retrieving relevant information.
- Carry learned information across conversations with different users.
Techniques to Improve Retention
Some ways to improve an AI’s memory and knowledge retention include:
- Knowledge bases – upload documents, data, manuals for the AI to refer to.
- Memory graphs – create visual linkages between connected concepts.
- Personal databases – build profiles of people, places, or products the AI can look up.
- Versioning – store different generations of the AI’s knowledge.
- Lifelong learning – enable the AI to continuously expand its knowledge from the web and conversations.
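The simplest of the techniques above, a knowledge base, can be sketched as tagged facts retrieved by keyword overlap with the incoming question. This is an illustrative toy (real systems typically use embeddings and vector search), with invented names throughout:

```python
# Toy sketch of a knowledge base: tagged facts retrieved by keyword overlap.
class KnowledgeBase:
    def __init__(self):
        self.entries = []  # (tags, fact) pairs

    def add(self, fact, tags):
        self.entries.append((set(tags), fact))

    def lookup(self, query_words):
        """Return facts whose tags overlap the query's words."""
        words = set(w.lower() for w in query_words)
        return [fact for tags, fact in self.entries if tags & words]
```

Retrieved facts would then be prepended to the prompt, so the model "remembers" them without any retraining.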
Best Practices for Human Users
As a human user interacting with the AI, you can also help improve its memory:
- Provide missing information – if the AI seems unaware of something you mentioned before, restate it.
- Quiz the AI – test its recall of your previous conversations periodically.
- Correct mistakes – fix any incorrect information the AI remembers.
- Tag key details – highlight names, dates, or facts for the AI to log.
- Summarize conversations – ask the AI to summarize key points discussed periodically.
Why Memory Matters for Businesses
Expanding memory and knowledge retention for AI systems enables:
- More personalized, contextually relevant conversations.
- Employees getting quicker, higher quality support.
- More efficient customer service and sales interactions.
- Scaling expertise as more knowledge is retained over time.
- Reduced need for re-training as learned information is retained.
So in summary, improving memory and knowledge retention for large language models allows businesses to maximize their value – making them a wise investment for long-term success.
Hypothetical Example Company
Here is a hypothetical example of how a company could effectively use large language models like Claude while following the best practices outlined in this guide:
Claude is used by the marketing team at Anthropic Software, a tech startup, to generate content.
Instead of prompting vaguely, they frame requests clearly like:
“Can you please write a 300 word Facebook post announcing our upcoming webinar on AI ethics for a general audience?”
They specify the desired tone, length, audience and purpose upfront.
Modes and Capabilities
The customer support team switches Claude into “Customer Service Agent” mode to help respond to user questions with empathy and product knowledge.
The data analytics team toggles Claude into “Data Science” mode to analyze usage trends and summarize insights.
When Claude generates product descriptions, the product team provides feedback to refine the text over multiple iterations.
They praise clear, persuasive language and correct factual errors or repetitive phrasing.
Before publishing a Claude-generated press release, the PR team reviews it carefully.
They verify facts, watch for biased language, check compliance with regulations and company policies.
The CFO requires all teams to document when, where, and how Claude is used for transparency. The results:
- Marketing improves engagement on posts and ads.
- Support deftly answers customer questions 24/7.
- Data analysis is more efficient and clear.
- Product descriptions convince more users.
- Risks are mitigated through governance processes.
By following these best practices, this hypothetical company safely maximizes value from large language models like Claude and ChatGPT across departments.