artificial intelligence
AI-Related Cryptos Lead Altcoin Surge; Bitcoin Breakout Nears With Several Catalysts in Q4: Analyst
Published 3 hours ago by admin

NEAR, RNDR, TAO and LPT booked double-digit gains as artificial intelligence-focused tokens were the best performers within the CoinDesk 20 Index.
OpenAI’s New AI Shows ‘Steps Towards Biological Weapons Risks’, Ex-Staffer Warns Senate
Published 2 days ago on September 22, 2024 by admin

OpenAI’s newest GPT-o1 AI model is the first to demonstrate capabilities that could aid experts in reproducing known—and new—biological threats, a former company insider told U.S. Senators this week.
“OpenAI’s new AI system is the first system to show steps towards biological weapons risk, as it is capable of helping experts in planning to reproduce a known biological threat,” William Saunders, a former member of technical staff at OpenAI, told the Senate Committee on the Judiciary Subcommittee on Privacy, Technology, & the Law.
This capability, he warned, carries the potential for “catastrophic harm” if AGI systems are developed without proper safeguards.
Experts also testified that artificial intelligence is evolving so quickly that a potentially treacherous benchmark known as Artificial General Intelligence looms on the near horizon. At the AGI level, AI systems can match human intelligence across a wide range of cognitive tasks and learn autonomously. If a publicly available system can understand biology and develop new weapons without proper oversight, the potential for malicious users to cause serious harm grows exponentially.
“AI companies are making rapid progress towards building AGI,” Saunders told the Senate Committee. “It is plausible that an AGI system could be built in as little as three years.”
Helen Toner, a former OpenAI board member who voted in favor of firing co-founder and CEO Sam Altman, also expects to see AGI sooner rather than later. “Even if the shortest estimates turn out to be wrong, the idea of human-level AI being developed in the next decade or two should be seen as a real possibility that necessitates significant preparatory action now,” she testified.
Saunders, who worked at OpenAI for three years, highlighted the company’s recent announcement of GPT-o1, an AI system that “passed significant milestones” in its capabilities. As reported by Decrypt, even OpenAI said it decided to step away from the traditional numerical increase in GPT version names because this model exhibited new capabilities that made it fairer to see it not just as an upgrade, but as an evolution—a brand-new type of model with different skills.
Saunders is also concerned about the lack of adequate safety measures and oversight in AGI development. He pointed out that “no one knows how to ensure that AGI systems will be safe and controlled,” and criticized OpenAI’s new approach to AI development, arguing that the company cares more about profitability than safety.
“While OpenAI has pioneered aspects of this testing, they have also repeatedly prioritized deployment over rigor,” he cautioned. “I believe there is a real risk they will miss important dangerous capabilities in future AI systems.”
The testimony also revealed some of the internal challenges at OpenAI, especially those that came to light after Altman’s brief ouster. “The Superalignment team at OpenAI, tasked with developing approaches to control AGI, no longer exists. Its leaders and many key researchers resigned after struggling to get the resources they needed,” he said.
His words add to a growing list of complaints and warnings that AI safety experts have leveled at OpenAI’s approach. Ilya Sutskever, who co-founded OpenAI and played a key role in firing Altman, resigned after the launch of GPT-4o and founded Safe Superintelligence Inc.
OpenAI co-founder John Schulman and its head of alignment, Jan Leike, left the company to join rival Anthropic, with Leike saying that under Altman’s leadership, safety “took a backseat to shiny products.”
Likewise, former OpenAI board members Toner and Tasha McCauley wrote an op-ed published by The Economist, arguing that Sam Altman was prioritizing profits over responsible AI development, hiding key developments from the board, and fostering a toxic environment in the company.
In his statement, Saunders called for urgent regulatory action, emphasizing the need for clear safety measures in AI development, not just from the companies but from independent entities. He also stressed the importance of whistleblower protections in the tech industry.
The former OpenAI staffer highlighted the broader implications of AGI development, including the potential to entrench existing inequalities and facilitate manipulation and misinformation. Saunders also warned that the “loss of control of autonomous AI systems” could potentially result in “human extinction.”
Edited by Josh Quittner and Andrew Hayward
Pride, Prejudice and Pixels: Meet an AI Elizabeth Bennet at Jane Austen’s House
Published 1 week ago on September 14, 2024 by admin

It is a truth universally acknowledged that a single man in possession of a good fortune must be in want of an AI.
At least, that’s what Jane Austen’s House thinks. The museum in Hampshire, England—where the author lived and worked—has teamed up with AI firm StarPal and the University for the Creative Arts (UCA) to create “Lizzy,” an AI avatar based on Austen’s Pride and Prejudice heroine Elizabeth Bennet.
“It is so exciting to finally be able to lift Elizabeth Bennet off the page and to be able to have real-time conversations with her,” said Sophy Smith, director of games and creative technology at UCA, in a press release.
“This technology has the potential to transform experiences within both the museum and heritage sector, as well as the education sector,” Smith added, noting that the technology will enable museum visitors to “engage directly” with Austen’s character.
Creating Lizzy
The first step in creating an AI avatar was to pick a suitable fictional character.
“There are lots of AI avatars, but these were people that existed in the past,” Lauren Newport-Quinn, project manager for UCA’s Games and Innovation Nexus, told Decrypt. “We thought it’d be nice to do something with a fictional character, where no one’s been able to pick their brains before.”
The team debated which character would be best suited to the project, which needed “someone who has a lot to say—has some strong opinions—who’s very well rounded and could give good advice,” Newport-Quinn said. “That’s when we landed on Elizabeth Bennet.”
To create Lizzy’s knowledge bank, StarPal and UCA turned to a selection of novels, manuscripts, and period-accurate information curated with the help of Jane Austen’s House researchers.
“It was basically anything that was instructed by the museum director as personal knowledge that she should have,” Newport-Quinn said. As well as Pride and Prejudice itself, Lizzy draws on “scholarly studies on Pride and Prejudice, the works of Jane Austen as a whole, and studies on her life.” This was supplemented with demographic and lifestyle information from the Regency era.
The avatar’s dress and hair, meanwhile, were designed and created by students from the Games Arts and Digital Fashion courses at UCA, drawing on fashion templates from the Regency period.
“It was exciting bringing to life the cut of dress, accessories, and embroideries—all inspired by historical drawings and descriptions,” said UCA MA Digital Fashion student Anya Haber, who created Lizzy’s dress in 3D. She added that “it showed how useful technology can be in a historical setting, letting fans engage with fictional characters.”
Conversations with AI avatars could be “an easier way to learn for certain learning styles,” Newport-Quinn explained. “If you’re not a visual passive learner, where reading something you don’t digest that information as well—if you have a conversation with someone, that might be able to enhance your level of knowledge on that topic.”
Smith affirmed that AI avatars could be used as educational tools, where “instead of only having text-based revision guides, students can now learn about literature by speaking directly to the characters.”
AI avatars
AI avatars are increasingly being used to bring fictional characters, dead celebrities, and even digital twins to life. Earlier this year, AI firm Soul Machines created an avatar of Marilyn Monroe, while London-based developer Synthesia has created “Personal Avatars” that enable users to create a digital video duplicate of themselves that can be used on social media, in marketing campaigns and training videos.
British actor and musician FKA Twigs revealed in a U.S. Senate hearing earlier this year that she had created just such a digital twin, explaining that it is “not only trained in my personality, but […] can also use my exact tone of voice to speak many languages,” and that it could help her reach a more global fanbase.
But the question of who controls AI-generated likenesses has raised concerns. In October, a bipartisan group of U.S. Senators introduced the “No Fakes Act,” which aims to outlaw the creation of AI-generated likenesses without consent.
Edited by Ryan Ozawa.
Prime-Time AI: Sam Altman Gives Oprah an OpenAI Primer
Published 1 week ago on September 14, 2024 by admin

In an in-depth conversation with TV and film icon Oprah Winfrey on Thursday, OpenAI CEO Sam Altman shared his thoughts on artificial intelligence and what the future holds for the relationship between humans and computers.
In one of several interviews for Winfrey’s prime-time special “AI and the Future of Us,” Altman shed light on the transformative potential of this technology, as well as the critical challenges that developers and policymakers must address.
“Four years ago, most of the world, if they found out about AI, thought about self-driving cars or some other thing,” he told Winfrey. “It was only in 2022 when, for the first time, people said, ‘Okay, this ChatGPT thing, this computer talks to me, now that’s new.’ And then since then, if you look at how much better it’s gotten, it’s been a pretty steep rate of improvement.”
Altman called AI the “next chapter of computing,” which allows computers to understand, predict, and interact with their human operators.
“We have figured out how to make computers smarter, to understand more, to be able to be more intuitive and more useful,” he said.
When asked to describe how ChatGPT works, Altman went back to basics, saying the core of ChatGPT’s capabilities lies in its ability to predict the next word in a sequence, a skill honed through being trained on large amounts of text data.
“The most basic level, we are showing the system 1,000 words in a sequence and asking it to predict what word comes next, and doing that again and again and again,” he explained, comparing it to when a smartphone attempts to predict the next word in a text message. “The system learns to predict, and then in there, it learns the underlying concepts.”
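The training objective Altman describes can be made concrete with a toy sketch. The following is an illustration only, and an assumption on our part: it uses a simple frequency-based bigram model, whereas GPT models use large neural networks trained on tokens, not raw word counts. Still, it shows the core idea of learning to predict the next word from examples.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a
# training text, then predict the most frequently observed follower.
corpus = "the cat sat on the mat the cat ate the food".split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" twice; "mat" and "food" once each
```

Here `predict_next("the")` returns "cat" because that pairing appears most often in the training text; real language models learn far richer statistics, but the predict-and-correct loop is the same in spirit.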
During the segment, Winfrey noted that a lack of trust led to a major shakeup at OpenAI. In November 2023, Altman was abruptly fired as CEO, with the board citing a lack of trust in his leadership—although he was reinstated days later.
“So the bar on this is clearly extremely high—the best thing that we can do is to put this technology in the hands of people,” Altman said. “Talk about what it is capable of, what it’s not, what we think is going to come, what we think might come, and give our best advice about how society should decide to use [AI].”
“We think it’s important to not release something which we also might get wrong and build up that trust over time, but it is clear that this is going to be a very impactful technology, and I think a lot of scrutiny is thus super warranted,” he added.
One of the concerns raised during the interview was the need for diversity in the AI industry, with Winfrey pointing out that predominantly white males currently dominate the field.
“Obviously, we want everybody to see themselves in our products,” Altman said. “We also want the industry workforce to be much more diverse than it is, and there’s slower-than-we’d-like progress, but there is progress there,” he said, expressing a commitment to ensuring that the benefits of AI are accessible to all.
Altman also highlighted OpenAI’s collaboration with policymakers in developing safer artificial intelligence, saying that he speaks with members of the U.S. government—from the White House to Congress—multiple times a week.
Last month, OpenAI and Anthropic announced the establishment of a formal collaboration with the U.S. AI Safety Institute (AISI). Under the agreement, the institute would have access to each company’s new models, ChatGPT and Claude respectively, prior to and following their public release.
Altman said collaboration between AI developers and policymakers is crucial, as well as safety testing of AI models.
“A partnership between the companies developing this technology and governance is really important; one of the first things to do, and this is now happening, is to get the governments to start figuring out how to do safety testing on these systems—like we do for aircraft or new medicines or things like that,” Altman said. “And then I think from there, if we can get good at that now, we’ll have an easier time figuring out exactly what the regulatory framework is later.”
When Winfrey told Altman that he’s been called the most powerful and perhaps most dangerous man on the planet, the CEO pushed back.
“I don’t feel like the most powerful person or anything even close to that,” he said. “I feel the opportunity—responsibility in a positive way—to get to nudge this in a direction that I think can be really good for people.”
“That is a serious, exciting, somewhat nerve-wracking thing, but it’s something that I feel very deeply about, and I realize I will never get to touch anything this important again,” Altman added.
Edited by Ryan Ozawa.