
NavList:

A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding

    Some advice on chatbots
    From: Frank Reed
    Date: 2025 Oct 13, 13:30 -0700

    I think it's worth adding some advice on A.I. chatbots for NavList members...

    First: they're getting better, but some are better than others. I suggest ChatGPT and Claude AI. Second: you need to understand what they're good at, and also you need to understand some of the things that they are, so far, not good at and also some things that they are, so far, completely incapable of doing...

    I think I'll start with that last one. The models that underlie these remarkable chatbots are static. You can't teach them anything. You may find in a conversation that ChatGPT or one of the others will say something like "Oh you are so right. Thank you for correcting me." These are additions created by layers that "wrap" the primary language model. Are they fake politeness? All politeness is social lubricant. Chatbot politeness is performative --it's a show. It's designed to preserve the illusion of teachability and warmth without shattering the human-machine rapport, but it's also a bit of a lie.

    There. That's the bad news out of the way. This will undoubtedly improve in a few years. The models will be able to adjust their understanding of topics at some point, but for now the training process is extremely expensive in terms of time and energy. Notice though that chatbots are already able to add to their knowledge by web search. They can talk about current events and developments in science, for example, but they will interpret this new knowledge within the rather strict bounds of the original training. This capability of adding new knowledge should become more sophisticated within a year or a few years.

    Chatbots remain a bit miraculous. They start out as statistical databases. Their designers feed them mountains of text, then they sit back and wait. A few weeks later, when the training is done, the human managers sit down at the keyboard, and it's alive! Or so it seems... the machine is fluent in dozens of languages, can talk intelligently about endless topics, and gets your jokes. Not only does it get your jokes, but it can explain why humans laugh at jokes in biological and evolutionary terms... it can talk to you about comedy in Classical Greece or Ancient China, in the original dialects or in translation into a language of your choice, and it can generate a library of new jokes that can be genuinely funny. How does this happen? Well, that's the thing... It's not entirely clear, even to the experts who build and manage these A.I. systems. It's "funny"...

    When you "talk" with an A.I., you're engaging with a clone, or maybe you can think of it as a polyp on a giant hydra... Every time you start a conversation, it's a new clone, and it learns about you by looking at your prior conversations, but --the way things are built now-- only if you ask it to. It's like you're entering a little cubicle at a library with "post-it" notes on the walls from your prior conversations. The librarian enters and checks a few highlights that have been noted in earlier sessions, but this clone is not the intelligence that you met before... That's just an analogy, of course, but the cubicle concept also helps to understand some of the quirks that you may see in online interactions with an A.I. chatbot.

    Chatbots "don't do math", at least not literally, until they are hooked up to a separate bit of code that keeps them in line. To extend the analogy, the math logic lives in another cubicle. It's not built in quite the same way, but it has basic mathematical logic, things like "7+7=14" instead of "7+7=77". The latter works just fine from the perspective of conversation, but it isn't math. That's where the separate math modules come into play, and these are some of the things that have been dramatically improved in the past couple of years. Nonetheless, be alert that the math analysis is separate, and if it's detached from your favorite chatbot for "maintenance", then watch out! Only three months ago, during one strange week, the Google Gemini chatbot happily told me that the diameter of a circle was one-half the circumference. It got better, but episodes like this can happen unannounced. Be careful.
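    As a quick illustration of why it pays to check: the relation is C = pi*d, so the diameter is the circumference divided by pi, not by two. A few lines of Python (just a sketch, nothing chatbot-specific) show how far off that erroneous answer was:

```python
import math

# A chatbot once claimed: diameter = circumference / 2.  Check it.
circumference = 10.0

claimed = circumference / 2        # the chatbot's (wrong) formula -> 5.0
correct = circumference / math.pi  # C = pi * d, so d = C / pi -> ~3.1831

print(f"claimed diameter: {claimed:.4f}")
print(f"correct diameter: {correct:.4f}")
```

    The wrong formula overstates the diameter by a factor of pi/2, roughly 57% too large -- easy to catch if you remember to check.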

    In addition to amazing fluency in dozens and dozens of human languages, chatbots can code. They can create fairly good computer "software" in nearly any language. In my own experience, their skill level here is still beginner, but they have great depth of familiarity with languages and coding environments as well as great breadth across the range of computing languages and systems including some that are quite obscure. All those software developers worried about losing their jobs are right if they are entry-level, but there's a whole new, and eventually very large, job category spinning off from the interaction between traditional software developers and A.I. coding... Chatbots can invent disastrously complicated code, or they can get bogged down in debugging cycles that lead nowhere. Smart humans complement the skills of the chatbots.

    Chatbots also have their eyes closed (I wouldn't say they're blind, because this is a different sort of limitation). When you upload an image, for example, a separate module analyzes it and passes a structured text-based analysis to the chatbot. They can "see" only indirectly, through what that module passes along. More problematically, they cannot necessarily see what they output. The chatbot sends you text with presentation flourishes like "italics" and "bullet points", but these are constructed for output elsewhere, typically through an app or a web interface. Similarly, if you ask for a PDF or some simple text-based file as output, the chatbot does not directly "see" that --it's another module doing that work, and the chatbot you're talking to needs to be told if the file isn't right.

    And the simplest rule of interaction with a chatbot is just what it says on a 'shampoo' bottle: "lather - rinse - repeat" or more formally "iterate". If you don't get the information you expect from a chatbot, then enhance your "prompt" with more instructions and repeat. Iteration, repeating with more detailed instructions, can quickly converge on the target you're hoping for. Similarly, when you're looking for factual information, it can be helpful to ask a question two or three different ways. Look for contradictions. If you point them out, the chatbot has sufficient "debate" skills to recognize that factual conflicts can be resolved by deeper analysis. This often works. On the other hand, don't be afraid to bail out. These things are not authorities. They chat about authoritative information. It's up to us to decide whether the original information is genuinely authoritative, and often the contradictions in chatbot replies reflect real inconsistencies in the sources.
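    In code form, that "lather - rinse - repeat" loop might look like the sketch below. The `ask` function here is a purely hypothetical stand-in for a real chatbot call (a real client library would replace it); the point is the shape of the loop: check the reply, sharpen the prompt, and try again.

```python
def ask(prompt: str) -> str:
    """Hypothetical stand-in for a chatbot call (a real API client goes here).

    Toy behavior: only a sufficiently specific prompt earns the fuller reply.
    """
    if "cite your sources" in prompt:
        return "The answer is X, per sources A and B."
    return "The answer is X."

def iterate(prompt: str, looking_for: str, refinement: str,
            max_rounds: int = 3) -> str:
    """Lather - rinse - repeat: re-ask with an enhanced prompt until satisfied."""
    reply = ask(prompt)
    for _ in range(max_rounds):
        if looking_for in reply:      # got what we were hoping for
            return reply
        prompt += " " + refinement    # enhance the prompt...
        reply = ask(prompt)           # ...and repeat
    return reply

result = iterate("What is the answer?",
                 looking_for="sources",
                 refinement="Please cite your sources.")
print(result)
```

    The first round comes back without sources, the refinement is appended, and the second round converges. With a real chatbot the "check" step is your own judgment rather than a string match, but the rhythm is the same.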

    Chatbot training is also not smart by itself. It is biased by content volume, not by quality or correctness, unless those properties happen to be reflected in the volume of text itself. Necessarily this implies that you will get answers derived from eras of controversy or periods when new systems were being developed. And there is an aspect of "mob rule" in this. It's not 'might makes right' but text volume makes right! Thus even when a chatbot is producing detailed, persuasive prose, be aware that it is only reformulating and repackaging the ideas that have produced the most text within the range of documents available in its training data. Because of this bias, chatbot "answers" tend to be conservative and conformist. Be careful.

    Finally, focus on the sweet spot. Chatbots are not oracles -- they are conversation engines. Treat them as conversational adversaries or even mirrors of your own ideas and thoughts. Ask questions and let the conversation develop. You may be surprised by the "intelligence" that these language models contain. But at the same time, their real value lies in reflecting your ideas back and getting you thinking through their responses.

    Frank Reed

       