Investors as well as those interested in the impact of economics on the country and world — a large statistical universe — often look for data. The current artificial intelligence hype and inclusion of such capabilities into software, computer operating systems, and web browsers would seem welcome effort-saving additions. However, there is a problem: wrong information from AI.

Here are three simple examples of how relying on the accuracy of what is handed to you can lead you to incorporate errors, whether tiny or significant, into your work, deliberations, and decisions. But first, some background.

Not All AI Is The Same

The term artificial intelligence, or AI, is thrown about without discernment, which can lead to significant mistakes.

Working examples of AI have existed since the early 1950s, according to the Encyclopedia Britannica. The category incorporates many types of technology. All have their uses and have been incorporated into software and even hardware for decades. The spellcheck that is handy in word processing is a form of AI. But even such a tool, so long developed and refined, can make mistakes. Machine learning can identify patterns, but not always correctly.

No tool is perfect. Wield a hammer in framing a wall and you might leave dents in wood or welts on your hand if your aim lacks accuracy. Could you use the claw end to break through wood? Absolutely, and yet you’re far better off reaching for a saw.

The current AI fad is the generative form, exemplified by ChatGPT. Some will claim these systems are a prototype of general intelligence. They aren’t. Rather, these programs employ laudably sophisticated statistical capabilities exercised on huge amounts of data — writing, visual imagery, video, audio, or computer code — and look for patterns. What word or graphic element typically follows a specific other in a given context?

Results can seem magical and even like a solution to the so-called Turing test, a thought experiment by the mathematician Alan Turing. If a third-party judge following a blind conversation between a computer and a human can’t tell the difference between the two, the device passes the test.

Semblance, however, is not identity. A convincing trompe l’oeil painting remains a flat surface, not a three-dimensional object. Generative AI frequently creates hallucinations: fabricated information that might seem correct but isn’t. That can go as far as citing research papers or legal citations that don’t exist.

There are other reasons, like data sources or timing, why generative AI can provide bad information.

Examples Of Wrong Investment Information From AI

This topic started for me on March 18, 2025, when I was double-checking the current federal funds rate range, the benchmark interest rate the Fed sets. Searching with Bing in Microsoft’s Edge browser, I got a list of potential sources. At the top of the first page was an AI summary.

It said that the current rate was 4.50% to 4.75%, then said the target range, which should be the same, was 5.25% to 5.50%. And finally, it added that the Fed had kept the rate steady between 4.25% and 4.50% since January 2024.

The current rate as of this writing is 4.25% to 4.50% according to the Federal Reserve, but that was set in December 2024 and left untouched since then.

The AI display was seriously wrong on a basic economic and financial figure.
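The contradiction above is mechanically detectable: the summary gave three versions of what should be a single range. A minimal sketch of that kind of cross-check, using the figures reported in the text (the helper name is illustrative, not any real API):

```python
def ranges_consistent(*ranges):
    """Return True only if every reported (low, high) rate range is identical."""
    return len(set(ranges)) == 1

# The three figures the AI summary reported on March 18, 2025:
ai_current = (4.50, 4.75)   # "current rate"
ai_target  = (5.25, 5.50)   # "target range" (should match the current rate)
ai_steady  = (4.25, 4.50)   # "held steady" range

# What the Federal Reserve actually published (set December 2024):
fed_official = (4.25, 4.50)

print(ranges_consistent(ai_current, ai_target, ai_steady))  # False: the three disagree
print(ai_steady == fed_official)  # True: only one of the three matches the Fed
```

Even without consulting the Fed, the internal inconsistency alone is enough to distrust the summary.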

I wondered what else might be awry. Next, I looked for the yield on the 10-year Treasury Note, another standard reference point for finance. It is a typical proxy for the risk-free interest rate portion of other longer-term interest rates.

The Bing AI answer, which should have been the closing value on March 17, was given as 4.28%. However, the March 17 value, according to the Department of the Treasury, was 4.31%.

The difference isn’t large, only 0.03 percentage points, but it is significant in bond trading, especially if part of a possible upward trend. The question is what source the software used for its information. The 4.28% figure was last seen on March 11 as an end-of-day rate.

The third example was the Tesla stock price on the morning of March 18. This was more subtle. The Copilot answer showed an opening value of $224.25, while Yahoo Finance reported $224.91, even though I ran both checks simultaneously. The two did agree on the closing price of $238.01 on March 17. This last example might have been a case of timing, with the two systems picking up data at slightly different times. But that wouldn’t explain the first two.

There may be types of AI that could work well in investing. Perhaps some of the technologies might help with managing portfolios. But it’s clear that automatically trusting data from software can deliver wrong information from AI, and that could be highly risky.

