2026-02-02 Ralph’s Technical Trawl – February 2026

The Pitfalls of Everyday AI

As a follow-up to my January column, I’ll share three anecdotes from my experience interacting with AI on highly technical subjects, showing how it can become confused and get some things downright wrong. Again, I will use “AI” as shorthand for Large Language Models (LLMs), such as ChatGPT.

  1. I was interested in how fast a rock dropped from a high rooftop is moving when it lands. AI at first assumed free fall in a vacuum and produced a well-presented answer. I followed up and asked it to factor in the opposing force of air friction, but when it included that factor it calculated a higher velocity at impact. I told it that the rock should hit at a lower velocity when air resistance is considered. It then did a particularly good job of correcting itself and presented an impeccable answer, which I was able to verify against my trusty university physics textbook.
  2. On another occasion, I asked AI to write a Windows program, or an “app” as they are known today, to produce Morse Code from a text file on the hard drive. As some of you may know, writing even a simple app in the Windows environment can be a daunting task, even for those comfortable programming in C++. After several iterations, the source code still had compiler errors: AI made wrong assumptions about the Unicode character set, missed a curly bracket at the end of a lengthy decision block, or assumed PI was a built-in constant. After I finished debugging the code and confirmed everything worked as intended, I noticed AI had thoughtfully added header text for each function and placed comments throughout the program. Those additions made it much easier to follow how the code worked.
  3. Finally, I asked AI the other day to help with adding a squelch circuit to my Realistic DX-394 receiver. The back-and-forth was going remarkably well until it decided that a diode won’t conduct current when its anode is at a higher potential than its cathode. After I pointed out its mistake, it explained away the error with all the evasiveness of a seasoned politician at a press conference.
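The physics of the first anecdote is easy to check yourself. Here is a minimal Python sketch that integrates a fall with quadratic air drag and compares it to the vacuum result; the rock’s mass, drag coefficient, cross-section, and the 50 m rooftop height are illustrative assumptions, not figures from the conversation described above.

```python
import math

def impact_velocity(h, m=1.0, Cd=0.47, rho=1.225, A=0.01, dt=0.001):
    """Numerically integrate a fall from height h (metres) with quadratic
    air drag; returns impact speed in m/s. Parameters are assumed values
    for a roughly spherical ~1 kg rock with 0.01 m^2 cross-section."""
    g = 9.81
    v, y = 0.0, h
    while y > 0:
        drag = 0.5 * rho * Cd * A * v * v / m  # deceleration from drag
        v += (g - drag) * dt                   # net downward acceleration
        y -= v * dt
    return v

h = 50.0                              # assumed rooftop height
v_vacuum = math.sqrt(2 * 9.81 * h)    # closed-form free-fall speed
v_air = impact_velocity(h)            # always less than the vacuum value
print(f"vacuum: {v_vacuum:.1f} m/s, with drag: {v_air:.1f} m/s")
```

Drag can only remove energy, so the with-drag figure must come out lower than the vacuum figure, which is exactly the sanity check AI initially failed.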
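For the second anecdote, the heart of the app is just a lookup table. The Windows C++ program I describe above was far more involved; this is only a Python sketch of the core text-to-Morse translation, using the standard International Morse table.

```python
# International Morse table for letters and digits.
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..', '0': '-----','1': '.----','2': '..---','3': '...--',
    '4': '....-','5': '.....','6': '-....','7': '--...','8': '---..',
    '9': '----.',
}

def text_to_morse(text):
    """Translate text to Morse: letters separated by spaces, words by ' / '.
    Characters without a Morse equivalent are silently skipped."""
    words = text.upper().split()
    return ' / '.join(
        ' '.join(MORSE[ch] for ch in word if ch in MORSE)
        for word in words
    )

print(text_to_morse("SOS"))  # ... --- ...
```

In a real app this string would then drive timing (a dit, a dah, and the gaps between them), which is where the Windows-specific audio plumbing comes in.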
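And for the third anecdote, the fact AI got backwards is captured by the Shockley diode equation: current flows when the anode is at a higher potential than the cathode (forward bias), and essentially none flows the other way. The saturation current and ideality factor below are illustrative values, not measurements from the DX-394 project.

```python
import math

def diode_current(v, Is=1e-12, n=1.0, Vt=0.02585):
    """Shockley diode equation: I = Is * (exp(V / (n*Vt)) - 1).
    v is the anode-to-cathode voltage; Is (saturation current), n
    (ideality factor), and Vt (thermal voltage at ~300 K) are assumed."""
    return Is * (math.exp(v / (n * Vt)) - 1.0)

# Anode 0.7 V above cathode: substantial forward current.
# Anode 0.7 V below cathode: only the tiny reverse saturation current.
print(f"+0.7 V: {diode_current(0.7):.3g} A, -0.7 V: {diode_current(-0.7):.3g} A")
```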

Quite often, AI responds like a highly intelligent and articulate assistant who occasionally appears slightly inebriated. Today’s AI certainly isn’t perfect: its synthesis of all the data available to it can produce excellent, scholarly answers, but it doesn’t “understand” the world the way humans do. Sometimes the patterns and logical connections derived from “big data” lead it astray, and then AI can lead you down a garden path to an elegantly presented and very plausible wrong answer.

It seems the most effective use of AI isn’t blind trust but collaboration coupled with a healthy dose of skepticism. AI works best when we remain curious, challenge its output, and treat it as a partner, not an oracle.


Last Updated on 2026-02-15 by Joannadanna