How the US Military Says Its Billion Dollar AI Gamble Will Pay Off

War is more profitable than peace, and AI developers are eager to capitalize by offering the U.S. Department of Defense various generative AI tools for the battlefields of the future.

The latest evidence of this trend came last week when Claude AI developer Anthropic announced that it was partnering with military contractor Palantir and Amazon Web Services (AWS) to provide U.S. intelligence and the Pentagon access to Claude 3 and 3.5.

Anthropic said Claude will give U.S. defense and intelligence agencies powerful tools for rapid data processing and analysis, enabling faster military operations.

Experts say these partnerships allow the Department of Defense to quickly adopt advanced AI technologies without needing to develop them internally.

“As with many other technologies, the commercial marketplace always moves faster and integrates more rapidly than the government can,” retired U.S. Navy Rear Admiral Chris Becker told Decrypt in an interview. “If you look at how SpaceX went from an idea to implementing a launch and recovery of a booster at sea, the government might still be considering initial design reviews in that same period.”

Becker, a former Commander of the Naval Information Warfare Systems Command, noted that integrating advanced technology initially designed for government and military purposes into public use is nothing new.

“The internet began as a defense research initiative before becoming available to the public, where it’s now a basic expectation,” Becker said.

Anthropic is only the latest AI developer to offer its technology to the U.S. government.

Following the Biden Administration’s memorandum in October on advancing U.S. leadership in AI, ChatGPT developer OpenAI expressed support for U.S. and allied efforts to develop AI aligned with “democratic values.” More recently, Meta also announced it would make its open-source Llama AI available to the Department of Defense and other U.S. agencies to support national security.

During Axios’ Future of Defense event in July, retired Army General Mark Milley noted that advances in artificial intelligence and robotics will likely make AI-powered robots a larger part of future military operations.

“Ten to fifteen years from now, my guess is a third, maybe 25% to a third of the U.S. military will be robotic,” Milley said.

In anticipation of AI’s pivotal role in future conflicts, the DoD’s 2025 budget requests $143.2 billion for Research, Development, Test, and Evaluation, including $1.8 billion specifically allocated to AI and machine learning projects.

Protecting the U.S. and its allies is a priority. Still, Dr. Benjamin Harvey, CEO of AI Squared, noted that government partnerships also provide AI companies with stable revenue, early problem-solving, and a role in shaping future regulations.

“AI developers want to leverage federal government use cases as learning opportunities to understand real-world challenges unique to this sector,” Harvey told Decrypt. “This experience gives them an edge in anticipating issues that might emerge in the private sector over the next five to 10 years.”

He continued: “It also positions them to proactively shape governance, compliance policies, and procedures, helping them stay ahead of the curve in policy development and regulatory alignment.”

Harvey, who previously served as chief of operations data science for the U.S. National Security Agency, also said another reason developers look to make deals with government entities is to establish themselves as essential to the government’s growing AI needs.

With billions of dollars earmarked for AI and machine learning, the Pentagon is investing heavily in advancing America’s military capabilities, aiming to use the rapid development of AI technologies to its advantage.

While the public may envision AI’s role in the military as involving autonomous, weaponized robots advancing across futuristic battlefields, experts say that the reality is far less dramatic and more focused on data.

“In the military context, we’re mostly seeing highly advanced autonomy and elements of classical machine learning, where machines aid in decision-making, but this does not typically involve decisions to release weapons,” Steve Finley, president of Kratos Defense’s Unmanned Systems Division, told Decrypt. “AI substantially accelerates data collection and analysis to form decisions and conclusions.”

Founded in 1994, San Diego-based Kratos Defense has partnered extensively with the U.S. military, particularly the Air Force and Marines, to develop advanced unmanned systems like the Valkyrie fighter jet. According to Finley, keeping humans in the decision-making loop is critical to preventing the feared “Terminator” scenario from taking place.

“If a weapon is involved or a maneuver risks human life, a human decision-maker is always in the loop,” Finley said. “There’s always a safeguard—a ‘stop’ or ‘hold’—for any weapon release or critical maneuver.”
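
In software terms, what Finley describes is a human-in-the-loop gate: the system may analyze and recommend freely, but any critical action blocks until a person explicitly approves it. The sketch below is purely illustrative, assumes nothing about any real military system, and every name in it is invented.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    critical: bool  # True for a weapon release or a maneuver risking human life

def request_human_approval(rec: Recommendation) -> bool:
    """Stand-in for the 'stop/hold' safeguard: a human operator decides."""
    answer = input(f"Approve critical action '{rec.action}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute(rec: Recommendation) -> None:
    # Non-critical analysis runs automatically; critical actions block
    # until a person explicitly approves them.
    if rec.critical and not request_human_approval(rec):
        print(f"HOLD: '{rec.action}' blocked pending human decision")
        return
    print(f"Executing: {rec.action}")

execute(Recommendation("summarize sensor feed", critical=False))  # runs automatically
execute(Recommendation("weapon release", critical=True))          # waits on a human
```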

Despite how far generative AI has come since the launch of ChatGPT, experts, including author and scientist Gary Marcus, say current limitations of AI models put the real effectiveness of the technology in doubt.

“Businesses have found that large language models are not particularly reliable,” Marcus told Decrypt. “They hallucinate, make boneheaded mistakes, and that limits their real applicability. You would not want something that hallucinates to be plotting your military strategy.”

Known for critiquing overhyped AI claims, Marcus is a cognitive scientist, AI researcher, and author of six books on artificial intelligence. Regarding the dreaded “Terminator” scenario, and echoing Kratos Defense’s executive, Marcus emphasized that deploying fully autonomous AI-powered robots would be a mistake.

“It would be stupid to hook them up for warfare without humans in the loop, especially considering their current clear lack of reliability,” Marcus said. “It concerns me that many people have been seduced by these kinds of AI systems and not come to grips with the reality of their reliability.”

As Marcus explained, many in the AI field believe that simply feeding AI systems more data and computational power will continually enhance their capabilities—a notion he described as a “fantasy.”

“In the last weeks, there have been rumors from multiple companies that the so-called scaling laws have run out, and there’s a period of diminishing returns,” Marcus added. “So I don’t think the military should realistically expect that all these problems are going to be solved. These systems probably aren’t going to be reliable, and you don’t want to be using unreliable systems in war.”

Edited by Josh Quittner and Sebastian Sinclair

AI Startup Hugging Face is Building Small LMs for ‘Next Stage Robotics’

AI startup Hugging Face envisions that small—not large—language models will be used for applications including “next stage robotics,” its Co-Founder and Chief Science Officer Thomas Wolf said.

“We want to deploy models in robots that are smarter, so we can start having robots that are not only on assembly lines, but also in the wild,” Wolf said while speaking at Web Summit in Lisbon today. But that goal, he said, requires low latency. “You cannot wait two seconds so that your robots understand what’s happening, and the only way we can do that is through a small language model,” Wolf added.

Small language models “can do a lot of the tasks we thought only large models could do,” Wolf said, adding that they can also be deployed on-device. “If you think about this kind of game changer, you can have them running on your laptop,” he said. “You can have them running even on your smartphone in the future.”

Ultimately, he envisions small language models running “in almost every tool or appliance that we have, just like today, our fridge is connected to the internet.”

The firm released its SmolLM language model earlier this year. “We are not the only one,” said Wolf, adding that, “Almost every open source company has been releasing smaller and smaller models this year.”

He explained that, “For a lot of very interesting tasks that we need that we could automate with AI, we don’t need to have a model that can solve the Riemann conjecture or general relativity.” Instead, simple tasks such as data wrangling, image processing and speech can be performed using small language models, with corresponding benefits in speed.
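
As a concrete illustration, a model of this size can be pulled and run locally with the Hugging Face transformers library. This is a minimal sketch, assuming the publicly released SmolLM-360M-Instruct checkpoint id; any comparably small model would work the same way.

```python
# Minimal sketch: run a small language model locally with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM-360M-Instruct"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # small enough for a laptop CPU

# A "simple task" in Wolf's sense: light data wrangling via prompting.
prompt = "Extract the date from: 'Invoice issued 2024-11-08 in Lisbon.'\nDate:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```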

The performance of this year’s 1-billion-parameter Llama model is “equivalent, if not better than, the performance of a 10 billion parameters model of last year,” he said. “So you have a 10 times smaller model that can reach roughly similar performance.”

“A lot of the knowledge we discovered for our large language model can actually be translated to smaller models,” Wolf said. He explained that the firm trains them on “very specific data sets” that are “slightly simpler, with some form of adaptation that’s tailored for this model.”

Those adaptations include “very tiny, tiny neural nets that you put inside the small model,” he said. “And you have an even smaller model that you add into it and that specializes,” a process he likened to “putting a hat for a specific task that you’re gonna do. I put my cooking hat on, and I’m a cook.”
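
What Wolf sketches resembles adapter-style fine-tuning, in which a small trainable bottleneck sits beside frozen base layers and learns the specialization. The PyTorch snippet below is a minimal sketch of that idea, not Hugging Face’s actual implementation (their peft library offers production versions); the class name and layer sizes are invented for illustration.

```python
import torch
import torch.nn as nn

class TinyAdapter(nn.Module):
    """A small trainable bottleneck added beside a frozen linear layer."""
    def __init__(self, base: nn.Linear, bottleneck: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # base model stays frozen
        self.down = nn.Linear(base.in_features, bottleneck, bias=False)
        self.up = nn.Linear(bottleneck, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)        # adapter starts as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the tiny task-specific "hat".
        return self.base(x) + self.up(self.down(x))

layer = nn.Linear(512, 512)                   # stand-in for one model layer
adapted = TinyAdapter(layer)
x = torch.randn(1, 512)
print(adapted(x).shape)                       # torch.Size([1, 512])
```

Because only the bottleneck weights train, the “hat” can be swapped per task while the base model stays untouched, which is what makes the approach attractive for small, specialized deployments.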

In the future, Wolf said, the AI space will split across two main trends.

“On the one hand, we’ll have this huge frontier model that will keep getting bigger, because the ultimate goal is to do things that humans cannot do, like new scientific discoveries,” he said. The long tail of AI applications, meanwhile, will see the technology “embedded a bit everywhere, like we have today with the internet.”

Edited by Stacy Elliott.

Elon Musk Warns AI May Go Rogue, Here’s Why

Tesla CEO Elon Musk warned of the potential risks of artificial intelligence (AI) during his appearance at the Future Investment Initiative conference in Riyadh. Addressing global investors and officials remotely, Musk expressed concern over AI’s trajectory, stating there is a 10% to 20% chance that AI development could take a dangerous path.

However, despite the concerns, he maintained an optimistic outlook, suggesting that there remains an 80% to 90% likelihood of AI leading to positive outcomes.

Elon Musk Warns AI Could Turn Rogue, Predicts 10-20% Risk of Dangerous Path

Speaking to attendees at the Saudi forum, Musk underlined the potential for artificial intelligence to develop in unexpected and possibly harmful ways. He estimated a 10% to 20% risk of AI veering in a “dangerous” direction, urging vigilance from global leaders.

This projection reflects Musk’s ongoing concerns about AI, which he has repeatedly emphasized as a technology requiring careful oversight and ethical considerations. However, he also struck a hopeful tone, remarking on an 80% to 90% probability of AI yielding positive advancements.

Musk further projected that artificial intelligence could surpass human capabilities in the future, potentially within the next one to two years. He noted that AI’s growth trajectory could lead to a level of sophistication that enables it to perform any task a human can accomplish. 

Elon Musk emphasized, 

“I think AI is a significant existential threat and something we should be paying close attention to. It’ll be able to do anything a human can do, possibly within the next year or two.”

Predicting that some 10 billion humanoid robots could be in use by 2040, Musk highlighted the acceleration of AI in areas including automation and robotics. He pointed out that as AI’s capabilities evolve, so will its role in both routine and complex tasks.

This rapid development, he cautioned, calls for a proactive approach to ensure that AI’s influence on society remains beneficial. Notably, Bitcoin miners are also embracing AI, which offers them a stable foundation for sustainable operations.

Concerns Over “Woke, Nihilistic” AI Models

In addition to concerns about AI going rogue, Musk raised a point about the ideological biases he sees embedded in certain AI models, specifically in the United States. He warned that AI systems developed with a “woke, nihilistic” mindset could prioritize controversial views over objective truths, impacting their alignment with unbiased, factual information. 

Musk observed that models trained in regions like the San Francisco Bay Area reflect local socio-political philosophies, which may shape the AI’s decision-making.

This issue, Musk noted, could complicate the role of AI in society, particularly as it expands into more sensitive areas. Hence, Musk advocated for artificial intelligence to prioritize truth-seeking objectives and avoid adopting socio-political biases.

Tesla CEO Endorses Donald Trump

In addition, Elon Musk openly expressed his preference for Donald Trump’s presidential candidacy, citing his concerns about over-regulation. Musk argued that a Trump administration would foster a friendlier environment for technological advancement, noting that the current Biden administration’s regulatory landscape stifles innovation.

He referred to regulatory delays in the aerospace industry, explaining that securing a permit to launch a rocket takes longer than building the rocket itself.

Musk linked this perspective to his endorsement of Donald Trump, expressing more optimism about AI’s future under the Trump administration. 

Meanwhile, Dogecoin co-founder Billy Markus expressed excitement about the potential media attention Dogecoin could receive if Donald Trump wins the 2024 U.S. election. Markus hinted that mainstream media might be compelled to cover Dogecoin more seriously, especially given Elon Musk’s vocal support.

Ronny Mugendi

Ronny Mugendi is a seasoned crypto journalist with four years of professional experience and more than 4,000 published articles on cryptocurrency trends and technologies across various media outlets. He aims to inform, educate, and introduce more people to the blockchain and DeFi world. Outside of his journalism career, Ronny enjoys bike riding and exploring new trails and landscapes.

Disclaimer: The presented content may include the personal opinion of the author and is subject to market conditions. Do your own market research before investing in cryptocurrencies. Neither the author nor the publication holds any responsibility for your personal financial loss.

Robert Downey, Jr. to ‘Sue All Future Executives’ Who Create His AI Digital Replica

“Iron Man” star Robert Downey, Jr. is laying down some boundaries on the use of his likeness for digital replicas created with generative AI.

The actor, who appeared in eleven Marvel Cinematic Universe films as Tony Stark, told the “On With Kara Swisher” podcast that he would “sue all future executives” who attempted to create a digital replica of him.

Asked about Marvel Studios executives seeking to resurrect his portrayal of Tony Stark without his involvement, using generative AI, Downey, Jr. said that, “I’m not worried about them hijacking my character’s soul, because there’s like, three or four guys and gals who make all the decisions there anyway, and they would never do that to me, with or without me.”

When host Kara Swisher pointed out that other executives might, in the future, Downey, Jr. responded that, “I intend to sue all future executives just on spec,” and that even after his death, “My law firm will still be very active.”

The actor was sanguine about the prospect of digital replicas, noting that, “I feel about it minimally because I have an actual emotional life that’s occurring that doesn’t have a lot of room for that.”

That may be because Downey, Jr. has already had his likeness scanned and used on-screen, with digital doubles used for action sequences in the “Avengers” films, and the actor himself being digitally de-aged for 2016’s “Captain America: Civil War.”

Downey, Jr. is currently grappling with the implications of AI on Broadway in the play “McNEAL,” in which he plays a novelist who uses the technology to help him write his latest work.

The play’s writer, Ayad Akhtar, used generative AI while penning the script, he told Swisher.

“One of the things that I wanted the play to do was to create that sense of magic,” he said, adding that, “you have that in the final monologue, where the monologue appears generated by AI, that there’s a sense of magic that this is actually happening—because I feel that that’s been my experience.” He conceded that, “I had to imitate the AI in order to make it work for the play; I couldn’t get the AI to do it itself.”

“Its ability to now analyze a story and identify the weaknesses and potential opportunities is the thing that has been most surprising,” Akhtar added.

Actors and AI

The film industry is divided over the use of generative AI, with actor Nicolas Cage recently calling the technology “inhumane,” and accusing the studios of wanting to use AI to “change your face after you’ve already shot it.”

Under a 2023 agreement struck between actors’ union SAG-AFTRA and the Alliance of Motion Picture and Television Producers, studios are permitted to create “employment-based digital replicas” (EBDRs) made with the actor’s participation, and independently created digital replicas (ICDRs) made without the direct involvement of the original actor.

While the actors’ union has reached an agreement with motion picture producers, SAG-AFTRA is currently striking over the use of artificial intelligence in video game productions. The union is seeking a deal that would require video game producers to secure consent from performers for the use of AI to replicate their voices, likenesses, and performances, and to compensate performers for the use of their digital replicas in games.

Edited by Stacy Elliott.
