Artificial Intelligence (AI) and Machine Learning (ML)

AI Thinks Like Us—Flaws, Biases, and All, Says New Study

Artificial intelligence is highly capable, yet far from flawless. A new study finds that OpenAI’s ChatGPT makes decisions much the way humans do, reproducing human mental errors and cognitive biases in judgment tasks.
The AI performed worse on subjective problems that call for human-style intuition and judgment. The researchers found that ChatGPT is overconfident, avoids risk, and falls for classic reasoning errors such as the gambler’s fallacy. It did, however, avoid two mistakes humans commonly make: base-rate neglect and the sunk-cost fallacy.

Human Bias in a Machine

The study, titled “A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?” and published in INFORMS’ journal Manufacturing & Service Operations Management, put the AI through 18 classic cognitive-bias tests. ChatGPT exhibited human-like biases in about 47% of its responses.
These included:

  • Overconfidence: presenting its answers as more accurate than they actually were.
  • Ambiguity aversion: steering away from uncertain options even when they might yield better outcomes.
  • Stereotype-based reasoning: making assumptions from stereotypes when answering questions, as in the well-known “Linda problem.”
  • Confirmation bias: habitually favoring information that matches its pre-existing beliefs.
  • Risk aversion: choosing safer, lower-reward options over statistically superior ones.

GPT-4 outperformed its predecessors on logical tasks, yet its biased responses appeared even more frequently.

What This Means for Real-World Decisions

AI systems are already used in hiring, loan approvals, medical decision-making, and many other organizational processes. The research highlights a critical risk: because AI can mimic flawed human reasoning, it may amplify our bad decisions rather than correct them.

Lead researcher Yang Chen of Western University notes that AI learns its faulty reasoning from human-generated data. When AI is used for judgment-heavy tasks, it tends to make the same mistakes people make on those tasks.

The researchers stress that these findings call for proper oversight. AI should not be trusted unconditionally as a perfect decision-maker; for critical decisions, it needs the same scrutiny we would apply to a human employee.

Oversight and Ethics Are Essential

According to Samuel Kirshner of the UNSW Business School, without proper supervision AI is likely to make current decision-making problems worse rather than solve them. Meena Andiappan of McMaster University argues that AI should operate under ethical frameworks, much like any organizational leader wielding authority.

In her view, AI functions like a team member: its behavior requires human oversight to ensure it stays within the required guidelines.

What’s the Path Forward?

The study’s authors recommend:

  • Regular audits of AI decisions.
  • Continuous model evaluation and refinement.
  • Performance-based selection of the AI version best suited to each particular task.

According to Tracy Jenkin of Queen’s University, AI models fall into two broad types: those whose judgment patterns resemble human thinking, and those whose logical patterns resemble computational thinking. Choosing the right model for a given task requires regular reassessment as the technology evolves.

Final Thoughts

AI is not the perfectly rational machine many people expect it to be. It outperforms humans in some specific tasks, yet it stumbles over many of the same reasoning errors we do. If we truly want better decision-making, guiding AI development well means paying attention to both its strengths and its all-too-human weaknesses.
