What the 🦊 is Generative AI?

Traditional AI focuses on recognizing patterns, making predictions, and automating decision-making based on structured inputs.

Essentially, think of IF-THEN-ELSE rules: explicit logic that maps known inputs to known outputs.
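If it helps to picture it, here's a caricature of that style of automation in Python. The function name and rules are invented for illustration; the point is just that every behavior has to be written out by hand.

```python
# A caricature of traditional, rule-based automation: every behavior is spelled out in advance.
# The function name and the rules are made up for illustration.
def route_ticket(subject: str) -> str:
    if "refund" in subject.lower():
        return "billing"
    elif "password" in subject.lower():
        return "it-support"
    else:
        return "general-queue"

print(route_ticket("I forgot my password"))  # -> it-support
```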

Generative AI refers to AI systems capable of creating new content, such as text, images, audio, or code, based on patterns learned from data.

It's transforming how we work by enabling machines to generate human-like output at scale.

Generative AI works through a process of pattern recognition and prediction based on massive amounts of training data.

But, aren’t we an AI-first company? Let’s ask GenAI to describe itself!

We asked ChatGPT, since it's the most casual of the bunch. Here's what it had to say:

When I was trained, I didn't memorize explicit statements like "Buffy Summers is the Slayer in Sunnydale." Instead, my neural network learned statistical patterns about how words and concepts relate to each other. I learned that the tokens for "Buffy," "Slayer," and "Sunnydale" frequently appear near each other in certain contexts.

The knowledge in models like me exists in the form of billions of numerical weights in neural connections. These weights represent the strength of associations between different concepts. There's no lookup table of facts - just a complex web of weighted relationships.

When you ask me about Buffy, I'm not retrieving a stored fact. Instead, I'm using my weights to predict the most likely continuation based on the patterns I've seen. This is why I can sometimes generate plausible-sounding but incorrect information - I'm predicting patterns, not recalling verified facts.

GenAI can handle new combinations of concepts it has never directly seen, because it has learned the connections between them. But without discrete facts, it sometimes blends related concepts together in ways that humans with discrete knowledge wouldn't.
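To make that concrete, here's a deliberately tiny sketch in Python. A real model learns billions of weights with a neural network; this toy just counts which word tends to follow which in a made-up "corpus." But the core idea is the same: it predicts a likely continuation, it doesn't look up a stored fact.

```python
from collections import Counter, defaultdict

# A made-up, deliberately tiny "training corpus".
corpus = ("buffy is the slayer . the slayer lives in sunnydale . "
          "buffy lives in sunnydale").split()

# "Training": count which word tends to follow which.
# (A crude stand-in for the learned weights described above.)
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    # "Generation": return the most likely continuation, not a stored fact.
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # -> 'slayer'
print(predict_next("lives"))  # -> 'in'
```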

Are you having trouble imagining what this all looks like?
The best way to learn is by doing.

Try one of these. They’re impressive, we promise. But until you know what you’re doing, keep the conversation to simple topics and do not share any information you wouldn’t feel comfortable writing on a very large billboard right outside your house. Don’t worry, we’ll cover AI safety in an upcoming post.

First, a quick glossary of terms you'll keep running into:

  • Prompt engineering: The art and science of crafting effective instructions for AI systems. Good prompt engineering involves clear communication, specific instructions, and understanding how to structure your requests to get the most helpful, accurate, and appropriate responses. This includes techniques like providing examples, breaking complex tasks into steps, and setting appropriate constraints or formats for the AI's output.

  • Tokens: The fundamental units that AI language models break text into, similar to how humans process words or syllables. A token can be a word, part of a word, or even a single character, depending on the language and context. Each model has a maximum token limit (context window) that determines how much text it can process in a single conversation. In English, a token is roughly 4 characters, or about 3/4 of a word, on average. (There's a quick token-counting sketch right after this list.)

  • Context window: The total amount of text (measured in tokens) that an AI can consider when generating a response. This includes both your current request and the conversation history. A larger context window allows the AI to reference information from earlier in the conversation, maintain consistency in long interactions, and work with longer documents. Different models have different context window sizes, ranging from a few thousand to hundreds of thousands of tokens.

  • Multimodality: The ability of AI systems to understand and generate content across different types of media, or "modes." While text-only AI can only process written language, multimodal AI can work with:

    • Images (analyzing visual content or generating images from descriptions)

    • Audio (transcribing speech, understanding sound, or generating speech/music)

    • Video (analyzing motion and temporal visual information)

    • Structured data (working with tables, databases, or other formatted information)

    • Code (understanding and generating programming languages)

  • Knowledge cutoff: The point in time after which an AI model has no built-in knowledge of world events, developments, or information. This happens because models are trained on data available up to a specific date. For questions about events after that date, the AI cannot provide reliably accurate information unless it has access to external, up-to-date knowledge sources (for example, through RAG, defined next).

  • Retrieval-Augmented Generation (RAG): A hybrid approach that enhances language model outputs by first retrieving relevant information from external knowledge sources, then generating a response based on both the retrieved information and the model's trained capabilities. (There's a sketch of this after the list, too.) RAG allows AI to:

    • Access up-to-date information beyond its training cutoff

    • Cite specific sources for factual claims

    • Provide more accurate and verifiable responses

    • Connect to private or specialized datasets not in its original training

    • Reduce hallucinations (making up false information)

  • Temperature: A parameter that controls the randomness and creativity of AI-generated responses (also sketched after this list). This setting affects how the AI chooses its next words:

    • Low temperature (0.0-0.3): More deterministic, focused, and predictable responses. Better for factual tasks, code generation, or when consistency is critical.

    • Medium temperature (0.4-0.7): Balanced responses with some creativity while maintaining relevance.

    • High temperature (0.8-1.0): More diverse, surprising, and creative outputs. Better for brainstorming, creative writing, or generating varied options.
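Curious what tokens actually look like? If you have Python and OpenAI's tiktoken library installed (pip install tiktoken), this little sketch splits a sentence into tokens and counts them. Other models use different tokenizers, so your exact counts will vary.

```python
import tiktoken  # OpenAI's tokenizer library: pip install tiktoken

# cl100k_base is the encoding used by several recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Buffy Summers is the Slayer in Sunnydale."
token_ids = enc.encode(text)

print(len(token_ids), "tokens")
# Show how the text was split: often whole words, sometimes word fragments.
print([enc.decode([t]) for t in token_ids])
```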
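Here's a minimal sketch of the RAG idea. Everything in it is a stand-in: real systems use embeddings and a vector database for retrieval, and generate would be a call to whatever model you have access to. The retrieve-then-augment-then-generate shape is the part to remember.

```python
# A toy retrieve-then-generate loop. All names here are hypothetical stand-ins.
def answer_with_rag(question, documents, generate):
    # 1. Retrieve: naive keyword overlap stands in for real embedding search.
    question_words = set(question.lower().split())
    def overlap(doc):
        return len(question_words & set(doc.lower().split()))
    top_docs = sorted(documents, key=overlap, reverse=True)[:3]

    # 2. Augment: put the retrieved text into the prompt so the model can cite it.
    prompt = (
        "Answer the question using only the sources below, and cite them.\n\n"
        + "\n\n".join(f"[Source {i + 1}] {doc}" for i, doc in enumerate(top_docs))
        + f"\n\nQuestion: {question}"
    )

    # 3. Generate: `generate` is whatever LLM call you have available.
    return generate(prompt)
```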
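And temperature? Under the hood, a model scores every possible next token and then samples from those scores; temperature controls how strongly it sticks to the favorites. A rough sketch, with invented numbers:

```python
import numpy as np

def sample_next_token(logits, temperature=0.7):
    """Turn raw scores into probabilities and sample one token index."""
    # Lower temperature sharpens the distribution; higher temperature flattens it.
    scaled = np.array(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Invented scores for four candidate next words.
logits = [4.0, 3.5, 1.0, 0.2]
print(sample_next_token(logits, temperature=0.1))  # almost always index 0
print(sample_next_token(logits, temperature=1.0))  # more variety
```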

Here's a sample prompt to get you started:

You are a kind, wise, and funny librarian who knows exactly what kind of books I like.

You already know that my favorites are His Dark Materials by Philip Pullman; The Amazing Adventures of Kavalier & Clay by Michael Chabon; Tomorrow, and Tomorrow, and Tomorrow by Gabrielle Zevin; The Glass Hotel by Emily St. John Mandel; The Elementals by Michael McDowell; and Red Rising by Pierce Brown.

Please help me find my next audio book.

I am training for a marathon, so I will be spending a lot of time with my own thoughts, and I would welcome some fictional company.

Give me 10–15 recommendations with reasoning for each. A bulleted list is fine.

But I’ve never written a prompt before!

Oh no, a disaster!

Just kidding. Here's the fun part. This is how you learn. Don't be afraid to experiment.

Prompting is just a conversation. Have fun with it — and maybe learn something new.

  • I have these five ingredients: [ingredient 1], [ingredient 2], [ingredient 3], [ingredient 4], [ingredient 5], a picky kid, and 30 minutes. What can I make that we’ll both eat?

  • Write an interactive experience that demonstrates how GenAI prediction works, using metaphors a non-technical audience will remember.

  • Turn [this historical event] into a three-act play. Cast modern actors in the roles.

  • Here's a company website. What are they really trying to say?

  • Turn this [wall of legal text] into a rap battle.

  • Create a learning plan for me to master [skill] in [timeframe].

GenAI isn’t a crystal ball—it’s more like a really good workshop assistant: it won’t do your job for you, but it can make you faster, smarter, and a little more fearless. And if that’s not a good teammate, we don’t know what is.

Don’t like reading? We got you!

Take a listen here ->

Want to chat about it?