

Recently, we’ve fielded a spate of questions from people who have used an AI chatbot to help with a technical issue and then asked us to confirm whether the information was accurate, helpful, or even safe.
First off, we’re not offended. If you can work through simple problems on your own with the help of an AI chatbot, that lets us focus on helping you with the bigger picture and issues that can be solved only by someone with awareness of your physical setup or broad knowledge of your workflow.
However, the mere fact that we’re getting these questions shows that people aren’t entirely comfortable with the AI answers, which is a good thing. Because chatbots work by generating the most statistically likely words based on their training data or extracted from search results, they can sometimes return incorrect information that could be unhelpful or even damaging. And, of course, they’ll deliver it in a breezy, confident tone that doesn’t suggest any cause for concern.
For instance, we’ve seen chatbots confidently suggest deleting files or resetting permissions from the command line (be very afraid of anything that starts with sudo), disabling System Integrity Protection (almost never necessary), turning off Gatekeeper to install unsigned apps, resetting iCloud Keychain syncing, and more.
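To make that concrete, here are a few illustrative Terminal commands of the sort we’ve seen chatbots toss out, with notes on why they give us pause. The specific paths and options are just examples (and Apple has made some of these harder to run on recent macOS versions), so treat this as a sketch of the pattern to watch for, not a set of instructions. Please don’t run them:

  # Deletes files immediately, with no trip to the Trash and no undo
  sudo rm -rf ~/Library/Caches

  # Disables System Integrity Protection (run from macOS Recovery); almost never necessary
  csrutil disable

  # Turns off Gatekeeper’s checks so unsigned apps can run without warning
  sudo spctl --master-disable

If a chatbot hands you something that looks like these, that’s exactly the moment to slow down and ask questions before pressing Return.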
Here’s how to think about those responses. First, if you sense that following the chatbot’s instructions might cause problems, ask it to explain the potential risks and how to address them. Next, if you don’t understand what it’s telling you to do, say so and ask it to restate the instructions in simpler terms for someone less experienced. Finally, after pushing the chatbot for more details, ask yourself whether its instructions could lead to irreversible changes or data loss.
If you still have any hesitation after going through that process, then it’s time to contact us. It’s helpful to share your chatbot conversation with us so we can assess what it suggested and explain either why there was no need to worry or why you were right to check before taking an action you might regret.
Although this may seem like a modern problem, we’ve seen many similar situations over the years, where people get fired up about an article they read in an airline seatback magazine or something they heard from their brilliant nephew who’s getting a degree in computers from a very good college. There’s no intent to deceive from any of these sources (chatbots don’t have intent at all, much less intent to deceive), but technical advice makes sense only in the context of your goals and resources.
In fact, having conversations about AI suggestions can be valuable because they help you develop better technical judgment. We can help you understand the principles behind different technical solutions, highlight which details matter when evaluating recommendations, and build your confidence in knowing when to trust (or distrust) technical advice from any source. Think of it as collaborative problem-solving that makes you better equipped to handle future technical challenges, whether you tackle them independently or with our professional help.
For the record, chatbots can help you understand basic settings, find features in common apps, and interpret standard error messages. But whenever a suggested solution involves system-level changes or seems risky, that’s when you should contact us.
(Featured image by iStock.com/Valerii Apetroaiei)