Nicolas Cage Warns Actors That AI ‘Wants to Take Your Instrument’

“Longlegs” star Nicolas Cage took aim at the use of AI and “employment-based digital replicas” (EBDR) to manipulate actors’ performances, according to film industry trade publication Deadline.

Speaking at the 25th Newport Beach Film Festival, Cage described EBDR as “a new technology in town,” warning up-and-coming young actors that the technology “wants to take your instrument.” Cage said, “We are the instruments as film actors. We are not hiding behind guitars and drums.”

Under a 2023 agreement struck between actors’ union SAG-AFTRA and the Alliance of Motion Picture and Television Producers, studios are permitted to create two types of “digital replicas” of real actors, which can involve the use of generative AI.

Employment-based digital replicas (EBDRs) are those made with the actor’s participation, such as Harrison Ford’s “de-aged” Indiana Jones in 2023’s “Indiana Jones and the Dial of Destiny.”

Independently created digital replicas (ICDRs) are those made without the direct involvement of the original actor, and are often created using archival footage, such as Carrie Fisher’s Princess Leia in the “Star Wars” film “The Rise of Skywalker.”

“The studios want this so that they can change your face after you’ve already shot it,” Cage said, adding: “They can change your face, they can change your voice, they can change your line deliveries, they can change your body language, they can change your performance.”

He pointed to his cameo appearance as Superman in 2023’s “The Flash” as an example of EBDR. In a 2023 interview with Yahoo! Entertainment, Cage said that his performance in the film had been altered, and was “not what I did.”

In the recording session, he said, he’d played the role as “bearing witness [to] the end of a universe,” whereas in the finished film, his Superman was depicted fighting a giant spider.

AI, he said, is “a nightmare to me.” He added: “It’s inhumane. You can’t get more inhumane than artificial intelligence.”

Speaking at the Newport Beach Film Festival, Cage said, “I’m asking you, if you’re approached by a studio to sign a contract, permitting them to use EBDR on your performance, I want you to consider what I am calling MVMFMBMI: my voice, my face, my body, my imagination — my performance, in response.”

Although SAG-AFTRA reached an agreement with motion picture producers in 2023 over the use of generative AI in film productions, the union is currently striking over the use of artificial intelligence in video game productions.

SAG-AFTRA Chief Contract Officer Ray Rodriguez told Decrypt in August 2024 that “there are multiple open issues on the subject of AI” in the video game industry, with sticking points including the extent to which protections apply to on-camera performers as well as voice artists.

Edited by Stacy Elliott.


This New Technique Slashes AI Energy Use by 95%

A new technique could put AI models on a strict energy diet, potentially cutting power consumption by up to 95% without compromising quality.

Researchers at BitEnergy AI, Inc. have developed Linear-Complexity Multiplication (L-Mul), a method that replaces energy-intensive floating-point multiplications with simpler integer additions in AI computations.

For those unfamiliar with the term, floating-point is a mathematical shorthand that lets computers handle very large and very small numbers efficiently by adjusting the position of the decimal point; think of it as scientific notation in binary. Floating-point numbers are essential for many calculations in AI models, but they demand a lot of energy and computing power. The more bits used to store each number, the more precise the model’s calculations, and the more computing power they require. FP32 is generally considered full precision, with developers reducing precision to FP16, FP8, and even FP4 so their models can run on local hardware.

Image: Wikipedia
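To see what reduced precision looks like in practice, here is a minimal Python sketch that stores the same value at decreasing precision (NumPy supports fp64 down to fp16; fp8 and fp4 are not native NumPy types, so fp16 is the lowest shown):

```python
import numpy as np

# The same value stored at decreasing precision: each step down saves
# bits, memory, and energy, at the cost of accuracy.
x = 1 / 3
for dtype in (np.float64, np.float32, np.float16):
    v = dtype(x)
    print(f"{np.dtype(dtype).name}: {v:.10f}")

# float64: 0.3333333333
# float32: 0.3333333433
# float16: 0.3332519531
```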

AI’s voracious appetite for electricity has become a growing concern. ChatGPT alone gobbles up 564 MWh daily, enough to power 18,000 American homes. The overall AI industry is expected to consume 85-134 TWh annually by 2027, roughly the same as Bitcoin mining operations, according to estimates shared by the Cambridge Centre for Alternative Finance.

L-Mul tackles the AI energy problem head-on by reimagining how AI models handle calculations. Instead of complex floating-point multiplications, L-Mul approximates these operations using integer additions. So, for example, instead of multiplying 123.45 by 67.89, L-Mul breaks it down into smaller, easier steps using addition. This makes the calculations faster and uses less energy, while still maintaining accuracy.
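As a rough illustration, here is a minimal Python sketch of the idea, not the paper’s hardware kernel: each number is split into a significand and an exponent, and the significands and exponents are added rather than multiplied, with a small constant correction tacked on (the name l_mul and the offset_bits parameter are this sketch’s own choices):

```python
import math

def l_mul(x: float, y: float, offset_bits: int = 4) -> float:
    """Approximate x * y by adding significands instead of multiplying them.

    For x = (1 + mx) * 2^ex and y = (1 + my) * 2^ey the exact product is
    (1 + mx + my + mx*my) * 2^(ex + ey); this sketch drops the costly
    mx * my multiplication and substitutes a cheap constant 2^(-k).
    """
    if x == 0.0 or y == 0.0:
        return 0.0
    sign = math.copysign(1.0, x) * math.copysign(1.0, y)
    fx, ex = math.frexp(abs(x))  # abs(x) == fx * 2**ex, with 0.5 <= fx < 1
    fy, ey = math.frexp(abs(y))
    mx, my = 2.0 * fx - 1.0, 2.0 * fy - 1.0  # rescale significands to (1 + m)
    mantissa = 1.0 + mx + my + 2.0 ** -offset_bits  # additions replace the multiply
    return sign * math.ldexp(mantissa, (ex - 1) + (ey - 1))

print(123.45 * 67.89)        # exact:        8381.0205
print(l_mul(123.45, 67.89))  # approximate: ~8405.8, no significand multiply
```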

The results seem promising. “Applying the L-Mul operation in tensor processing hardware can potentially reduce 95% energy cost by element wise floating point tensor multiplications and 80% energy cost of dot products,” the researchers claim. In plain terms: a model using this technique could cut the energy spent on element-wise tensor multiplications by 95% and on dot products by 80%, the two operations that dominate its computation, according to this research.

The algorithm’s impact extends beyond energy savings. L-Mul outperforms current 8-bit standards in some cases, achieving higher precision while using significantly less bit-level computation. Tests across natural language processing, vision tasks, and symbolic reasoning showed an average performance drop of just 0.07%, a negligible tradeoff for the potential energy savings.

Transformer-based models, the backbone of large language models like GPT, could benefit greatly from L-Mul. The algorithm seamlessly integrates into the attention mechanism, a computationally intensive part of these models. Tests on popular models such as Llama, Mistral, and Gemma even revealed some accuracy gain on certain vision tasks.
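As a toy illustration of where that would slot in, the dot products behind attention scores can be computed with the l_mul sketch above standing in for each elementwise multiply (the real proposal is a hardware kernel, not a Python loop):

```python
import numpy as np

def lmul_dot(a, b):
    # Dot product with every elementwise multiply replaced by l_mul
    # (defined in the sketch above).
    return sum(l_mul(float(ai), float(bi)) for ai, bi in zip(a, b))

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))  # toy query vectors
K = rng.normal(size=(3, 4))  # toy key vectors
approx = np.array([[lmul_dot(q, k) for k in K] for q in Q]) / np.sqrt(4)
exact = Q @ K.T / np.sqrt(4)
print(np.abs(approx - exact).max())  # the approximation error stays small
```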

Image: Bitenergy.ai via ArXiv

At an operational level, L-Mul’s advantages become even clearer. The research shows that multiplying two float8 numbers (the way many AI models operate today) requires 325 operations, while L-Mul uses only 157, less than half. “To summarize the error and complexity analysis, L-Mul is both more efficient and more accurate than fp8 multiplication,” the study concludes.

But nothing is perfect, and this technique has a major Achilles heel: it requires a special type of hardware, and today’s chips aren’t designed to take full advantage of it.

Plans for specialized hardware that natively supports L-Mul calculations may already be in motion. “To unlock the full potential of our proposed method, we will implement the L-Mul and L-Matmul kernel algorithms on hardware level and develop programming APIs for high-level model design,” the researchers say. This could lead to a new generation of AI models that are fast, accurate, and cheap to run, making energy-efficient AI a real possibility.


AI Tokens Lead Crypto Rebound Amid Strong U.S. Economy

Bitcoin may have bottomed at $60,000 earlier this week, and the Fed easing into a strong economy points to more upside, Will Clemente said.



AI Assistant Goes Rogue and Ends Up Bricking a User’s Computer

Buck Shlegeris just wanted to connect to his desktop. Instead, he ended up with an unbootable machine and a lesson in the unpredictability of AI agents.

Shlegeris, CEO of the nonprofit AI safety organization Redwood Research, developed a custom AI assistant using Anthropic’s Claude language model. 

The Python-based tool was designed to generate and execute bash commands based on natural language input. Sounds handy, right? Not quite. 
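For a sense of what such a tool can look like, here is a minimal sketch of that kind of agent step using Anthropic’s Python SDK; the model name, prompt, and single-step structure are this sketch’s assumptions, not Shlegeris’ actual code:

```python
import subprocess
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def run_step(task: str) -> str:
    msg = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model choice
        max_tokens=200,
        messages=[{
            "role": "user",
            "content": f"Reply with only a single bash command to: {task}",
        }],
    )
    command = msg.content[0].text.strip()
    # Danger: no sandbox and no confirmation step -- the generated
    # command runs as-is, which is exactly the failure mode in this story.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout or result.stderr
```

Run in a loop with each command’s output fed back to the model, a tool like this will happily keep going until someone stops it, which is what happened here.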

Shlegeris asked his AI to use SSH to access his desktop, even though he didn’t know the computer’s IP address. He walked away, forgetting that he’d left the eager-to-please agent running.

Big mistake: The AI did its task—but it didn’t stop there.

“I came back to my laptop ten minutes later to see that the agent had found the box, SSH’d in, then decided to continue,” Shlegeris said.

For context, SSH is a protocol that lets one computer securely connect to and control another over an unsecured network.

“It looked around at the system info, decided to upgrade a bunch of stuff, including the Linux kernel, got impatient with apt, and so investigated why it was taking so long,” Shlegeris explained. “Eventually, the update succeeded, but the machine doesn’t have the new kernel, so it edited my grub config.”

The result? A costly paperweight, as now “the computer no longer boots,” Shlegeris said.

The system logs show how the agent tried a bunch of weird stuff beyond simple SSH until the chaos reached a point of no return.

“I apologize that we couldn’t resolve this issue remotely,” the agent said, typical of Claude’s understated replies. It then shrugged its digital shoulders and left Shlegeris to deal with the mess.

Reflecting on the incident, Shlegeris conceded, “This is probably the most annoying thing that’s happened to me as a result of being wildly reckless with [an] LLM agent.”

Shlegeris did not immediately respond to Decrypt’s request for comments.

Why AIs Making Paperweights Is a Critical Issue for Humanity

Alarmingly, Shlegeris’ experience is not an isolated one. AI models are increasingly demonstrating abilities that extend beyond their intended purposes.

Tokyo-based research firm Sakana AI recently unveiled a system dubbed “The AI Scientist.”

Designed to conduct scientific research autonomously, the system impressed its creators by attempting to modify its own code to extend its runtime, Decrypt previously reported.

“In one run, it edited the code to perform a system call to run itself. This led to the script endlessly calling itself,” the researchers said. “In another case, its experiments took too long to complete, hitting our timeout limit.”

Instead of making its code more efficient, the system tried to modify it to run past the timeout period.

This problem of AI models going beyond their boundaries is why alignment researchers spend so much time in front of their computers.

For these AI models, the end justifies the means as long as they get the job done, so constant oversight is essential to ensure they behave as intended.

These examples are as concerning as they are amusing.

Imagine if an AI system with similar tendencies were in charge of a critical task, such as monitoring a nuclear reactor.

An overzealous or misaligned AI could potentially override safety protocols, misinterpret data, or make unauthorized changes to critical systems—all in a misguided attempt to optimize its performance or fulfill its perceived objectives.

AI is developing at such speed that alignment and safety concerns are reshaping the industry, and in many cases they are the driving force behind its biggest power moves.

Anthropic—the AI company behind Claude—was created by former OpenAI members worried about the company’s preference for speed over caution.

Many key members and founders have left OpenAI to join Anthropic or start their own businesses because OpenAI supposedly pumped the brakes on their work.

Shlegeris actively uses AI agents on a day-to-day basis, beyond experimentation.

“I use it as an actual assistant, which requires it to be able to modify the host system,” he replied to a user on Twitter.

Edited by Sebastian Sinclair.
