
Elon Musk’s new AI startup is as ambitious as it is doomed

The public perception surrounding AI’s abilities is no match for the laws of physics.

Opinion by Tristan Greene

Almost nothing is known about Elon Musk’s latest endeavor, an artificial intelligence startup named xAI. But “almost nothing” is still something. And we can glean a lot from what little we do know.


As Cointelegraph recently reported, Musk announced xAI on July 12 in a statement comprising three sentences: “Today we announce the formation of xAI. The goal of xAI is to understand the true nature of the universe. You can meet the team and ask us questions during a Twitter Spaces chat on Friday, July 14th.”


Based on this information, we can deduce that xAI exists, it is doomed, and more information about how it will fail will be revealed on Twitter. The reason it is doomed is simple: The laws of physics prevent it.


According to a report from Reuters, Musk’s motivation for xAI is based on a desire to develop safe artificial intelligence (AI). In a recent Twitter Spaces event, he said: “If it tried to understand the true nature of the universe, that’s actually the best thing that I can come up with from an AI safety standpoint.”


This is a laudable goal, but any attempt to understand the “true” nature of the universe is doomed, because there is no ground-truth knowledge center against which we can verify our theories.


It’s not that humans aren’t smart enough to understand the nature of the universe — the problem is that the universe is really, really big, and we’re stuck inside of it.


Heisenberg’s Uncertainty Principle tells us unequivocally that certain pairs of properties cannot both be measured with arbitrary precision at the same time. This is why we can’t simply measure the distance between Earth and Uranus, wait a year, measure it again, and determine the exact rate of the universe’s expansion.
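

For reference, the most familiar, position-and-momentum form of the principle is usually written (in LaTeX notation) as

\Delta x \, \Delta p \geq \frac{\hbar}{2}

where \Delta x and \Delta p are the uncertainties in a particle’s position and momentum and \hbar is the reduced Planck constant: the more precisely one quantity is pinned down, the less precisely the other can be known.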


The scientific method requires observation, and, as the anthropic principle teaches us, all observers are limited.


In the case of the observable universe, we’re further limited by the nature of physics. The universe is expanding so rapidly that we are prevented from measuring anything beyond a certain point, no matter what tools we use.


The universe’s expansion doesn’t just make it bigger. It gives it a distinct, definable “cosmological horizon” that the laws of physics prevent us from measuring beyond. If we sent a probe out at the maximum speed the laws of physics allow, the speed of light, every part of the universe beyond the farthest point that probe could ever reach would be forever inaccessible.
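

In standard cosmology, that horizon can be made precise. Assuming the usual picture of an accelerating universe with scale factor a(t), the farthest proper distance a light-speed probe launched today could ever reach is given (in LaTeX notation) by

d_{\mathrm{event}} = a(t_0) \int_{t_0}^{\infty} \frac{c \, dt}{a(t)}

Because the expansion is accelerating, this integral converges to a finite value, roughly 16 billion light-years by current estimates, so everything farther away than that right now is permanently out of reach.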


This means even a hypothetical superintelligence capable of processing all of the data that’s ever been generated still could not determine any ground truths about the universe.


A slight twist on Schrödinger’s Cat thought experiment, called Wigner’s Friend, demonstrates why this is the case. In the original, Erwin Schrödinger imagined a cat sealed in a box with a vial of poison and a hammer that would shatter the vial, and thus kill the cat, upon the completion of a quantum process (the decay of a radioactive atom).


One of the fundamental differences between quantum and classical processes is that quantum processes can be affected by observation. In quantum mechanics, this means the hypothetical cat remains in a superposition of alive and dead until someone observes it.


Physicist Eugene Wigner was reportedly “irked” by this and decided to put his own spin on the thought experiment to challenge Schrödinger’s assertions. His version added two scientists: one inside the lab who opens the box to observe whether the cat is alive or dead, and another outside who opens the door to the lab to see whether the scientist inside knows whether the cat is alive or dead.


What xAI appears to be proposing is a reversal of Wigner’s thought experiment. They seemingly want to remove the cat from the box and replace it with a generative pre-trained transformer (GPT) AI system, i.e., a chatbot like ChatGPT, Bard or Claude 2.


Related: Elon Musk to launch truth-seeking artificial intelligence platform TruthGPT


Instead of asking an observer to determine whether the AI is alive or dead, their plan is to ask the AI to discern ground truths about the lab outside of the box, the world outside of the lab and the universe beyond the cosmological horizon without making any observations.


What xAI seems to be proposing would amount to the development of an oracle: a machine capable of knowing things it doesn’t have evidence for.


There is no scientific basis for the idea of an oracle; its origins are rooted in mythology and religion. Scientifically speaking, the best we can hope for is that xAI develops a machine capable of parsing all of the data that’s ever been generated. 


There’s no conceivable reason to believe this would turn the machine into an oracle, but it might help scientists see something they’ve missed and lead to further insight. Perhaps the secret to cold fusion is lying around in a Reddit data set somewhere that nobody has yet used to train a GPT model.


But, unless the AI system can defy the laws of physics, any answers it gives us regarding the “true” nature of the universe will have to be taken on faith until confirmed by observations made from beyond the box — and the cosmological horizon.


For these reasons, and many others related to how GPT systems actually interpret queries, there’s no scientifically viable method by which xAI, or any other AI company, can develop a binary machine running classical algorithms capable of observing the truth about our quantum universe.

Tristan Greene is a deputy news editor for Cointelegraph. Aside from writing and researching, he enjoys gaming with his wife and studying military history.


This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.