AI Confidence: Are Machines Misleading Us with Overconfidence?

I’ve been thinking a lot about how we think, especially when it comes to artificial intelligence (AI) and its attempts to mimic our thought processes. As humans, we spend a lot of our day just thinking and answering questions. This is pretty similar to what Generative AI (GenAI) models are trying to do today. Interestingly, the way we look for answers isn’t necessarily better or worse than how AI does it. The big difference lies in our “cognitive validation process.”

How AI Answers Questions

When AI gets a question, it goes through a pretty straightforward process:

  1. Understand the Question: The AI first parses the question to work out what is being asked, interpreting the intent even when the phrasing or grammar is imperfect.
  2. Search for Information: It then draws on the patterns learned from its training data (and, in retrieval-augmented setups, any connected knowledge sources) to surface relevant information.
  3. Formulate a Response: Finally, it crafts a response using sophisticated language generation to ensure the answer is clear and well-phrased.
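
A rough sketch of that pipeline in Python looks something like the following. Every helper here (normalize_question, retrieve_context, generate_answer) is a hypothetical placeholder for whatever model or retrieval system is actually in use, not a real library API.

```python
# A simplified, hypothetical sketch of the three-step pipeline above.
# Every helper is a placeholder, not a real library API.

def normalize_question(question: str) -> str:
    """Step 1: tidy the question and pin down what is being asked."""
    return question.strip().rstrip("?") + "?"

def retrieve_context(question: str) -> list[str]:
    """Step 2: surface relevant information from the model's knowledge
    (or an attached document store, in retrieval-augmented setups)."""
    return ["...relevant passages would appear here..."]

def generate_answer(question: str, context: list[str]) -> str:
    """Step 3: phrase a fluent answer from the question and context."""
    return f"Answer to {question!r}, drawing on {len(context)} source(s)."

def answer(question: str) -> str:
    q = normalize_question(question)
    return generate_answer(q, retrieve_context(q))

print(answer("how can we improve our project management process"))
```

Notice that nothing in this pipeline loops back on itself; the answer goes straight out the door.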

How Humans Process Questions: An Illustrative Approach

Imagine you’re in a team meeting and someone asks a complex question. Here’s how you might handle it:

  1. Validate the Question: You first figure out if the question is serious or just a joke. For example, if someone asks, “Can we build a spaceship by next week?” you might laugh and realize it’s not a serious question.

  2. Reframe the Question: You then rephrase the question in your mind to make sure you understand it. If asked, “How can we improve our project management process?” you might think, “What specific aspects of project management are we looking to improve?”

  3. Search Your Repository for an Answer: You mentally sift through your experiences and knowledge. Maybe you recall a recent project where you successfully used a new project management tool.

  4. Construct an Answer: You start to form a response based on your knowledge. You might say, “We could consider adopting a new project management tool that offers better task tracking and team collaboration features.”

  5. Test the Answer for Validity: Before speaking, you quickly check if your answer addresses the core of the question. You ask yourself, “Does this tool really improve our process, or is it just a minor enhancement?”

  6. If Invalid, Start Over: If you realize your answer isn’t fully addressing the question, you might ask for more details, like “Can you clarify which part of the process needs improvement?”

  7. Test the Answer for Confidence: You gauge how confident you are in your response. You think, “I’ve used this tool before, and it was effective, so I feel confident recommending it.”

  8. Construct the Final Answer: You then present your well-considered response, “Based on my experience, adopting this new tool could significantly enhance our project management by improving task tracking and team collaboration.”

This approach shows the nuanced and iterative nature of human thinking. We constantly validate, reframe, test, and refine our thoughts to make sure they’re accurate and reliable.
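
To make the contrast with the straight-through AI pipeline concrete, the loop below models these eight steps in Python. It is purely illustrative: every function stands in for a judgment a person makes implicitly, and the checks and thresholds are invented for the example.

```python
# Illustrative model of the iterative human process above. Every check
# is a stand-in for an implicit human judgment, not a real algorithm.

def is_serious(question: str) -> bool:
    # Step 1: validate the question (filter out obvious jokes).
    return "spaceship by next week" not in question.lower()

def reframe(question: str) -> str:
    # Step 2: restate the question to be sure we understand it.
    return f"What specifically is being asked by: {question!r}?"

def recall_experience(framed_question: str) -> str:
    # Step 3: search our mental repository of past experience.
    return "a recent project where a new tool improved task tracking"

def addresses_core_of(question: str, draft: str) -> bool:
    # Step 5: does the draft really answer the question?
    return "tool" in draft

def confidence_in(draft: str) -> float:
    # Step 7: gut-level confidence, reduced to a number for the sketch.
    return 0.8

def human_style_answer(question: str, max_attempts: int = 3) -> str:
    if not is_serious(question):
        return "I assume that's a joke."
    for _ in range(max_attempts):
        framed = reframe(question)                        # step 2
        experience = recall_experience(framed)            # step 3
        draft = ("We could adopt a new project management tool; "
                 f"I recall {experience}.")                # step 4
        if not addresses_core_of(question, draft):         # steps 5-6: invalid, clarify and retry
            question += " (specifically, which part of the process needs improvement?)"
            continue
        hedge = ("Based on my experience," if confidence_in(draft) > 0.7
                 else "I believe")                          # step 7
        return f"{hedge} {draft[0].lower()}{draft[1:]}"     # step 8
    return "I need more details before I can give a useful answer."

print(human_style_answer("How can we improve our project management process?"))
```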

The Distinction: The Cognitive Validation Process

The key difference lies in the cognitive validation process—the steps we take to make sure our responses are valid and confident. When answering questions, humans instinctively apply several filters:

  • Is the question valid?
  • Is the answer valid?
  • How confident are we in the answer?
  • What is the emotional tone of the question?
  • Is the answer relevant to the question?
  • Is the answer clear and easily understood?
  • Do we have the experience to support our answer?

These validations and confidence levels often show up in our language. For example, when we’re uncertain, we might use phrases like “I believe,” “It seems,” or “Based on my understanding.” When we’re confident, we use more definitive language like “I know,” “This will,” or “We should.” This linguistic nuance helps convey our thought process and the degree of certainty we attach to our answers.
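
As a rough illustration, that scale can be written down as a simple lookup from a confidence level to an opening phrase. The thresholds and phrases below are arbitrary choices made for the example, not established values.

```python
# Illustrative only: the thresholds and phrases are arbitrary choices.

def hedge_for(confidence: float) -> str:
    """Pick an opening phrase that signals how certain the speaker is."""
    if confidence >= 0.9:
        return "I know that"
    if confidence >= 0.7:
        return "Based on my understanding,"
    if confidence >= 0.5:
        return "I believe"
    return "It seems that"

for score in (0.95, 0.75, 0.55, 0.30):
    print(f"{score:.2f} -> {hedge_for(score)} ...")
```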

The Problematic AI Approach

In contrast, AI often generates an answer and presents it without explicitly using language that conveys confidence or validity. While many AI systems can provide confidence scores or probabilities alongside their answers, the answers themselves usually lack the nuanced phrasing that humans use to indicate their level of certainty. An AI might provide a response like, “The new project management tool offers better task tracking and team collaboration features,” without phrases like “I believe” or “Based on the data.” This can make it seem like the AI is always certain of its responses, which isn’t necessarily true.

The Impact on Human Perception

This approach can be problematic because it doesn’t match human expectations of how confidence and validity should be communicated. We rely on language cues to gauge the reliability of information. The absence of these cues in AI-generated answers can lead to misunderstandings and a lack of trust in the AI’s outputs. This apparent overconfidence in AI outputs can be especially dangerous in critical decision-making scenarios, where a nuanced understanding and expression of confidence are crucial.

The Need for AI Filters

So why can’t AI incorporate additional filters into its answers to maintain our confidence in its output? These filters would add extra checks and balances, making AI-generated responses more reliable, aligning them more closely with human cognitive validation processes, and ensuring that AI remains a trustworthy tool in our quest for knowledge. By integrating linguistic cues that indicate confidence and validity, AI could better mimic human communication patterns, improving user trust and the overall effectiveness of AI interactions.
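
Here is one sketch of what such a filter layer could look like: a hypothetical post-processing wrapper that checks a raw answer for relevance and then prepends language matching a confidence score. None of this is a feature of any particular AI system, and the checks are deliberately crude; they exist only to illustrate the idea.

```python
# Hypothetical post-processing "filter" between a model's raw answer
# and the user. The relevance check and thresholds are deliberately
# crude and exist only to illustrate the idea of extra checks.

from dataclasses import dataclass

@dataclass
class FilteredAnswer:
    text: str
    confidence: float
    relevant: bool

def looks_relevant(question: str, answer: str) -> bool:
    """Crude relevance check: do question and answer share key words?"""
    keywords = {w.lower().strip("?.,") for w in question.split() if len(w) > 4}
    answer_words = {w.lower().strip("?.,") for w in answer.split()}
    return bool(keywords & answer_words)

def add_confidence_cue(answer: str, confidence: float) -> str:
    """Prefix the answer with definitive or hedged language."""
    prefix = "Based on the data," if confidence >= 0.8 else "I believe"
    return f"{prefix} {answer[0].lower()}{answer[1:]}"

def filter_response(question: str, raw_answer: str, confidence: float) -> FilteredAnswer:
    text = add_confidence_cue(raw_answer, confidence)
    relevant = looks_relevant(question, raw_answer)
    if not relevant:
        text += " (This may not fully address the question.)"
    return FilteredAnswer(text=text, confidence=confidence, relevant=relevant)

result = filter_response(
    "How can we improve our project management process?",
    "The new project management tool offers better task tracking and team collaboration features.",
    confidence=0.65,
)
print(result.text)
```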

Enhancing AI Responses with Follow-up Prompts

Until AI can communicate in a more nuanced way that conveys validity and confidence, we can use follow-up prompts to draw that nuance out ourselves. For example, after getting an initial answer from an AI, you could ask, “How confident are you in this answer?” This prompt encourages the AI to provide a confidence score or a probability that indicates the likelihood of correctness.

Then, you can refine the response further by asking, “Based on your confidence in the answer, can you nuance the language used to represent your confidence level?” This additional prompt can help the AI adjust its phrasing to better align with human communication patterns, such as using terms like “I believe,” “It seems,” or “Based on the data” when the confidence level is lower, and more definitive language when the confidence level is higher.
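
In code, that two-prompt follow-up pattern might look like the sketch below. The ask_model function is a placeholder for whichever chat model or interface you actually use; the prompts are the same ones described above.

```python
# Sketch of the follow-up-prompt pattern. `ask_model` is a placeholder
# for whichever chat model or API you actually use; it is not a real
# library call.

def ask_model(conversation: list[dict]) -> str:
    """Placeholder: send the conversation to a chat model and return its reply."""
    return "(the model's reply would appear here)"

conversation = [{"role": "user",
                 "content": "How can we improve our project management process?"}]
initial_answer = ask_model(conversation)
conversation.append({"role": "assistant", "content": initial_answer})

# Follow-up 1: ask for an explicit confidence estimate.
conversation.append({"role": "user",
                     "content": "How confident are you in this answer?"})
conversation.append({"role": "assistant", "content": ask_model(conversation)})

# Follow-up 2: ask the model to restate the answer with matching hedged language.
conversation.append({"role": "user",
                     "content": ("Based on your confidence in the answer, can you nuance "
                                 "the language used to represent your confidence level?")})
final_answer = ask_model(conversation)
print(final_answer)
```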

By actively engaging with AI through these follow-up prompts, we can enhance the clarity and reliability of AI-generated responses, ensuring they better meet our expectations for nuanced and trustworthy communication. This iterative interaction helps bridge the gap between human and AI thinking, fostering a more effective and collaborative relationship with AI technology.

How do you think AI can better mimic human thought processes? Share your thoughts and let’s start a conversation!

Finance – Outside forces undermining the US Dollar

Several outside forces are actively working to undermine the US dollar (USD) as the world’s dominant currency for international trade. These efforts are driven by a combination of economic, geopolitical, and strategic motivations.

Motivations Behind Undermining the USD

  • Sanctions and Geopolitical Strategy: Countries facing US sanctions seek to reduce their vulnerability by decreasing their reliance on the USD.
  • Enhancing Global Influence: Promoting alternative currencies can enhance a country’s or region’s influence in the global economic system.
  • Economic Independence: Reducing dependency on the USD allows countries to have greater control over their monetary and fiscal policies.
  • Diversification and Risk Management: Diversifying away from the USD can help mitigate risks associated with fluctuations in the value of the USD and US economic policies.

Continue reading “Finance – Outside forces undermining the US Dollar”

Finance – How might the BRICS initiative impact the US Dollar?

The BRICS initiative, especially with its potential expansion and discussions about creating a common currency, could impact the US dollar (USD) in several ways. Here are some possible impacts:

1. Reduced Demand for USD:

  • Trade Settlements: If BRICS countries start using their own currencies or a new BRICS currency for trade among themselves and with other countries, the global demand for the USD for international trade could decrease.
  • Diversification of Reserves: Central banks might diversify their foreign exchange reserves away from the USD in favor of a BRICS currency or other currencies within the bloc.

Continue reading “Finance – How might the BRICS initiative impact the US Dollar?”