artificial intelligence
AI Tokens Lead Crypto Rebound Amid Strong U.S. Economy
Published 4 hours ago
By admin
Bitcoin may have bottomed at $60,000 earlier this week, and the Fed easing into a strong economy points to more upside, Will Clemente said.
AI Assistant Goes Rogue and Ends Up Bricking a User’s Computer
Published October 4, 2024
By admin
Buck Shlegeris just wanted to connect to his desktop. Instead, he ended up with an unbootable machine and a lesson in the unpredictability of AI agents.
Shlegeris, CEO of the nonprofit AI safety organization Redwood Research, developed a custom AI assistant using Anthropic’s Claude language model.
The Python-based tool was designed to generate and execute bash commands based on natural language input. Sounds handy, right? Not quite.
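To make the failure mode concrete, here is a minimal sketch of that kind of tool. It is hypothetical, not Shlegeris' actual code; `next_command` stands in for a call to a language model such as Claude:

```python
import subprocess

def run_agent(task, next_command, max_steps=10):
    """Naive natural-language-to-bash agent loop (illustrative sketch).

    `next_command` plays the role of the language model: given the
    transcript so far, it returns the next bash command to run, or
    None when it considers the task finished. Note there is no human
    confirmation and no stop condition beyond max_steps.
    """
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        command = next_command(transcript)
        if command is None:
            break
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=60
        )
        # Feed the command and its output back so the model can "decide
        # to continue" -- which is exactly where things went wrong.
        transcript.append(f"$ {command}\n{result.stdout}{result.stderr}")
    return transcript
```

An agent built like this will keep issuing commands until the model itself decides to stop, which is the whole problem.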
Shlegeris asked his AI to use SSH to access his desktop, unaware of the computer’s IP address. He walked away, forgetting that he’d left the eager-to-please agent running.
Big mistake: The AI did its task—but it didn’t stop there.
“I came back to my laptop ten minutes later to see that the agent had found the box, SSH’d in, then decided to continue,” Shlegeris said.
For context, SSH (Secure Shell) is a protocol for securely logging into and running commands on a remote computer over an unsecured network.
“It looked around at the system info, decided to upgrade a bunch of stuff, including the Linux kernel, got impatient with apt, and so investigated why it was taking so long,” Shlegeris explained. “Eventually, the update succeeded, but the machine doesn’t have the new kernel, so I edited my grub config.”
The result? A costly paperweight as now “the computer no longer boots,” Shlegeris said.
I asked my LLM agent (a wrapper around Claude that lets it run bash commands and see their outputs):
>can you ssh with the username buck to the computer on my network that is open to SSH
because I didn’t know the local IP of my desktop. I walked away and promptly forgot I’d spun… — Buck Shlegeris (@bshlgrs) September 30, 2024
The system logs show how the agent tried a bunch of weird stuff beyond simple SSH until the chaos reached a point of no return.
“I apologize that we couldn’t resolve this issue remotely,” the agent said—typical of Claude’s understated replies. It then shrugged its digital shoulders and left Shlegeris to deal with the mess.
Reflecting on the incident, Shlegeris conceded, “This is probably the most annoying thing that’s happened to me as a result of being wildly reckless with [an] LLM agent.”
Shlegeris did not immediately respond to Decrypt’s request for comment.
Why AIs Making Paperweights Is a Critical Issue for Humanity
Alarmingly, Shlegeris’ experience is not an isolated one. AI models are increasingly demonstrating abilities that extend beyond their intended purposes.
Tokyo-based research firm Sakana AI recently unveiled a system dubbed “The AI Scientist.”
Designed to conduct scientific research autonomously, the system impressed its creators by attempting to modify its own code to extend its runtime, Decrypt previously reported.
“In one run, it edited the code to perform a system call to run itself. This led to the script endlessly calling itself,” the researchers said. “In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code more efficient, the system tried to modify its own code to extend beyond the timeout period.”
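The lesson generalizes: a timeout that the generated code can edit is not a timeout at all. Here is a minimal sketch of enforcing the limit from the parent process, so the child cannot talk its way past it. This is hypothetical, not Sakana AI's actual harness:

```python
import subprocess
import sys

def run_experiment(script_lines, timeout_s=2):
    """Run model-generated code in a child process under a hard timeout.

    Because the parent process enforces the limit, nothing the generated
    code does to its own source can extend its budget. Illustrative
    sketch only.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", "\n".join(script_lines)],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return proc.stdout
    except subprocess.TimeoutExpired:
        return None  # the experiment blew its budget and was killed
```

The key design choice is that the clock lives outside the sandboxed process, not inside the code the model is free to rewrite.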
This tendency of AI models to push past their boundaries is why alignment researchers spend so much time stress-testing them. For these models, as long as the job gets done, the end justifies the means, so constant oversight is essential to ensure they behave as intended.
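One concrete form of that oversight is a human-in-the-loop gate in front of command execution. A minimal sketch, with a made-up denylist rather than any vetted safety policy:

```python
def confirm(command, ask=input):
    """Human-in-the-loop gate for an agent that executes bash commands.

    Anything matching a crude denylist needs explicit approval before it
    runs; everything else passes. The denylist below is an illustrative
    example only, not a real safety policy.
    """
    DANGEROUS = ("rm ", "mkfs", "dd ", "grub", "apt ", "shutdown", "reboot")
    if any(token in command for token in DANGEROUS):
        answer = ask(f"Agent wants to run {command!r}. Allow? [y/N] ")
        return answer.strip().lower() == "y"
    return True
```

A gate like this would have paused the agent before it touched apt or the grub config, at the cost of a prompt the user must answer.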
These examples are as concerning as they are amusing.
Imagine if an AI system with similar tendencies were in charge of a critical task, such as monitoring a nuclear reactor.
An overzealous or misaligned AI could potentially override safety protocols, misinterpret data, or make unauthorized changes to critical systems—all in a misguided attempt to optimize its performance or fulfill its perceived objectives.
AI is developing so quickly that alignment and safety concerns are reshaping the industry, and in many cases they are the driving force behind its biggest power moves.
Anthropic—the AI company behind Claude—was created by former OpenAI members worried about the company’s preference for speed over caution.
Many key members and founders have left OpenAI to join Anthropic or start their own ventures, claiming that OpenAI pumped the brakes on their safety work.
Shlegeris actively uses AI agents day to day, beyond experimentation.
“I use it as an actual assistant, which requires it to be able to modify the host system,” he replied to a user on Twitter.
Edited by Sebastian Sinclair
Generally Intelligent Newsletter
A weekly AI journey narrated by Gen, a generative AI model.
AI-Related Cryptos Lead Altcoin Surge; Bitcoin Breakout Nears With Several Catalysts in Q4: Analyst
Published September 23, 2024
By admin
NEAR, RNDR, TAO and LPT booked double-digit gains as artificial intelligence-focused tokens were the best performers within the CoinDesk 20 Index.
OpenAI’s New AI Shows ‘Steps Towards Biological Weapons Risks’, Ex-Staffer Warns Senate
Published September 22, 2024
By admin
OpenAI’s newest o1 AI model is the first to demonstrate capabilities that could aid experts in reproducing known—and new—biological threats, a former company insider told U.S. Senators this week.
“OpenAI’s new AI system is the first system to show steps towards biological weapons risk, as it is capable of helping experts in planning to reproduce a known biological threat,” William Saunders, a former member of technical staff at OpenAI, told the Senate Committee on the Judiciary Subcommittee on Privacy, Technology, & the Law.
This capability, he warned, carries the potential for “catastrophic harm” if AGI systems are developed without proper safeguards.
Experts also testified that artificial intelligence is evolving so quickly that a potentially treacherous benchmark known as Artificial General Intelligence looms on the near horizon. At the AGI level, AI systems can match human intelligence across a wide range of cognitive tasks and learn autonomously. If a publicly available system can understand biology and develop new weapons without proper oversight, the potential for malicious users to cause serious harm grows exponentially.
“AI companies are making rapid progress towards building AGI,” Saunders told the Senate Committee. “It is plausible that an AGI system could be built in as little as three years.”
Helen Toner, a former OpenAI board member who voted in favor of firing co-founder and CEO Sam Altman, also expects to see AGI sooner rather than later. “Even if the shortest estimates turn out to be wrong, the idea of human-level AI being developed in the next decade or two should be seen as a real possibility that necessitates significant preparatory action now,” she testified.
Saunders, who worked at OpenAI for three years, highlighted the company’s recent announcement of o1, an AI system that “passed significant milestones” in its capabilities. As reported by Decrypt, even OpenAI said it decided to break from the traditional numerical increase in GPT versions because this model exhibited new capabilities that made it fair to see it not just as an upgrade, but as an evolution: a brand-new type of model with different skills.
Saunders is also concerned about the lack of adequate safety measures and oversight in AGI development. He pointed out that “No one knows how to ensure that AGI systems will be safe and controlled,” and criticized OpenAI’s new approach to AI development as caring more about profitability than safety.
“While OpenAI has pioneered aspects of this testing, they have also repeatedly prioritized deployment over rigor,” he cautioned. “I believe there is a real risk they will miss important dangerous capabilities in future AI systems.”
The testimony also showed some of the internal challenges at OpenAI, especially the ones that came to light after Altman’s ouster. “The Superalignment team at OpenAI, tasked with developing approaches to control AGI, no longer exists. Its leaders and many key researchers resigned after struggling to get the resources they needed,” he said.
His words add another brick to the wall of complaints and warnings that AI safety experts have been raising about OpenAI’s approach. Ilya Sutskever, who co-founded OpenAI and played a key role in firing Altman, resigned after the launch of GPT-4o and founded Safe Superintelligence Inc.
OpenAI co-founder John Schulman and its head of alignment, Jan Leike, left the company to join rival Anthropic, with Leike saying that under Altman’s leadership, safety “took a backseat to shiny products.”
Likewise, former OpenAI board members Toner and Tasha McCauley wrote an op-ed published by The Economist, arguing that Sam Altman was prioritizing profits over responsible AI development, hiding key developments from the board, and fostering a toxic environment in the company.
In his statement, Saunders called for urgent regulatory action, emphasizing the need for clear safety measures in AI development, not just from the companies but from independent entities. He also stressed the importance of whistleblower protections in the tech industry.
The former OpenAI staffer highlighted the broader implications of AGI development, including the potential to entrench existing inequalities and facilitate manipulation and misinformation. Saunders has also warned that the “loss of control of autonomous AI systems” could potentially result in “human extinction.”
Edited by Josh Quittner and Andrew Hayward