Quick Start Guide for Mnemonic's Digital Twin of the Customer.
This guide provides a collection of sample prompts for use in the Digital Twin to help you get started.
The Digital Twin leverages advanced AI algorithms to create realistic and dynamic simulations of real-world entities, enabling users to explore various scenarios and outcomes.
The sample prompts are designed to guide users in generating accurate and meaningful digital twins, offering insights into the potential applications of the technology in fields such as customer analysis, marketing, sales, and product development.
To help you understand where information comes from, the Digital Twin cites its sources. A source can be general world knowledge, industry knowledge, or specific knowledge drawn from your data.
World Knowledge: Water is wet.
Industry Knowledge: Every American buys 100 gallons of drinking water each year.
Specific Knowledge: 12 oz cans of mango-flavored sparkling water were the top seller in November.
Knowing what you don't know is important. The Digital Twin is based on knowledge graphs built for your organization. This not only provides important context for its knowledge but also minimizes the risk of the hallucinations we are used to from LLMs. Depending on how open-ended your questions are, the Digital Twin may answer "I don't know", and this is a good thing. The Digital Twin will not send you down a rabbit hole searching for information that is simply made up.
What should you do when you get "I don't know" on crucial questions? Add more data. The Digital Twin bases its answers on data, so if you repeatedly get "I don't know", the relevant data is missing from the knowledge graph.
For example, if you get "I don't know" on behavioral questions regarding website visitors, it might be a good idea to connect your Google Analytics.
Temperature in a large language model (LLM) is a setting that controls the randomness of its responses. It influences how creative or predictable the model's output is. Think of it as how much spice you want to add to the response.
Standard language models like ChatGPT simply predict the next token; the Digital Twin is completely grounded in data, the knowledge graph our AI builds for you. To get more creative, flamboyant answers, you have to instruct the Digital Twin to go off the rails, and the temperature tells the AI by how much.
Low Temperature (e.g., 0.1 or 0.2): When the temperature is low, the model's responses are more focused and deterministic. It tends to choose the most likely next word, resulting in more predictable and sensible outputs. This is useful when you want clear and straightforward answers.
High Temperature (e.g., 0.8 or 1.0): When the temperature is high, the model's responses become more diverse and creative. It has a higher chance of picking less likely next words, leading to more varied and imaginative outputs. This is useful when you want creative writing or brainstorming.
In essence, temperature adjusts the level of creativity in the model's responses. Lower temperatures make the model's responses more conservative and predictable, while higher temperatures make them more varied and creative.
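To make the mechanics concrete, here is a minimal sketch of how temperature works under the hood in most LLMs (this is a generic illustration of temperature-scaled sampling, not the Digital Twin's actual implementation; the logit values are made up for the example). Dividing the model's raw scores by the temperature before converting them to probabilities sharpens the distribution at low temperatures and flattens it at high ones:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into sampling probabilities.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens
logits = [4.0, 2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.2)   # nearly all mass on the top token
high = softmax_with_temperature(logits, 1.0)  # probability spread more evenly
```

At a temperature of 0.2 the top token dominates almost completely, so sampling is effectively deterministic; at 1.0 the lower-ranked tokens retain a meaningful chance of being picked, which is what produces the more varied, creative output described above.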