Pride, Prejudice and Pixels: Meet an AI Elizabeth Bennet at Jane Austen’s House


It is a truth universally acknowledged that a single man in possession of a good fortune must be in want of an AI.

At least, that’s what Jane Austen’s House thinks. The museum in Hampshire, England—where the author lived and worked—has teamed up with AI firm StarPal and the University for the Creative Arts (UCA) to create “Lizzy,” an AI avatar based on Austen’s Pride and Prejudice heroine Elizabeth Bennet.

“It is so exciting to finally be able to lift Elizabeth Bennet off the page and to be able to have real-time conversations with her,” said Sophy Smith, director of games and creative technology at UCA, in a press release.

“This technology has the potential to transform experiences within both the museum and heritage, as well as the education sector,” Smith added, noting that it will enable museum visitors to “engage directly” with Austen’s character.

Creating Lizzy

The first step in creating an AI avatar was to pick a suitable fictional character.

“There are lots of AI avatars, but these were people that existed in the past,” Lauren Newport-Quinn, project manager for UCA’s Games and Innovation Nexus, told Decrypt. “We thought it’d be nice to do something with a fictional character, where no one’s been able to pick their brains before.”

Hair designs for “Lizzy.” Image: UCA/Chloe Kethero

The team debated which character would be best suited to the project, which needed “someone who has a lot to say—has some strong opinions—who’s very well rounded and could give good advice,” Newport-Quinn said. “That’s when we landed on Elizabeth Bennet.”

To create Lizzy’s knowledge bank, StarPal and UCA turned to a selection of novels, manuscripts, and period-accurate information curated with the help of Jane Austen’s House researchers.

“It was basically anything that was instructed by the museum director as personal knowledge that she should have,” Newport-Quinn said. As well as Pride and Prejudice itself, Lizzy draws on “scholarly studies on Pride and Prejudice, the works of Jane Austen as a whole, and studies on her life.” This was supplemented with demographic and lifestyle information from the Regency era.
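StarPal has not published the technical details of how Lizzy’s knowledge bank feeds into her replies. As a rough illustration of the general pattern such systems follow, here is a minimal, hypothetical Python sketch of retrieval-grounded prompting: curated snippets are ranked against a visitor’s question and folded into a persona prompt. The snippets, scoring function, and prompt format are illustrative assumptions, not details of the actual project.

    # Hypothetical sketch only: StarPal's actual pipeline is not public.
    # A tiny "knowledge bank" of curated snippets is ranked against the
    # visitor's question and folded into a persona prompt for the model.

    KNOWLEDGE_BANK = [
        "Elizabeth Bennet is the second of the five Bennet daughters.",
        "Regency-era gentry families socialized at assembly balls.",
        "Jane Austen lived and wrote at Chawton Cottage in Hampshire.",
    ]

    def retrieve(question: str, bank: list[str], k: int = 2) -> list[str]:
        """Rank snippets by naive word overlap with the question."""
        q = set(question.lower().split())
        ranked = sorted(bank, key=lambda s: len(q & set(s.lower().split())),
                        reverse=True)
        return ranked[:k]

    def build_prompt(question: str) -> str:
        """Assemble a persona prompt grounded in the retrieved snippets."""
        context = "\n".join(retrieve(question, KNOWLEDGE_BANK))
        return (
            "You are Elizabeth Bennet of Pride and Prejudice.\n"
            "Answer in her voice, using only this background:\n"
            f"{context}\n\nVisitor: {question}\nElizabeth:"
        )

    print(build_prompt("What was life like for the Bennet daughters?"))

A production system would swap the word-overlap ranking for embedding search and send the assembled prompt to a language model, but the curate-retrieve-prompt shape stays the same.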

The avatar’s dress and hair, meanwhile, were designed and created by students from the Games Arts and Digital Fashion courses at UCA, drawing on fashion templates from the Regency period.

Costume concepts for “Lizzy.” Image: UCA/Milo Ebsworth

“It was exciting bringing to life the cut of dress, accessories, and embroideries—all inspired by historical drawings and descriptions,” said UCA MA Digital Fashion student Anya Haber, who created Lizzy’s dress in 3D. She added that “it showed how useful technology can be in a historical setting, letting fans engage with fictional characters.”

Conversations with AI avatars could be “an easier way to learn for certain learning styles,” Newport-Quinn explained. “If you’re not a visual passive learner, where reading something you don’t digest that information as well—if you have a conversation with someone, that might be able to enhance your level of knowledge on that topic.”

Smith affirmed that AI avatars could be used as educational tools, where “instead of only having text-based revision guides, students can now learn about literature by speaking directly to the characters.”

A wireframe render of “Lizzy.” Image: UCA

AI avatars

AI avatars are increasingly being used to bring fictional characters, dead celebrities, and even digital twins to life. Earlier this year, AI firm Soul Machines created an avatar of Marilyn Monroe, while London-based developer Synthesia offers “Personal Avatars” that let users generate a digital video duplicate of themselves for use on social media, in marketing campaigns, and in training videos.

British actor and musician FKA Twigs revealed in a U.S. Senate hearing earlier this year that she had created just such a digital twin, explaining that it is “not only trained in my personality, but […] can also use my exact tone of voice to speak many languages,” and that it could help her reach a more global fanbase.

But the question of who controls AI-generated likenesses has raised concerns. In October, a bipartisan group of U.S. Senators introduced the “NO FAKES Act,” which aims to outlaw the creation of AI-generated likenesses without consent.

Edited by Ryan Ozawa.


Prime-Time AI: Sam Altman Gives Oprah an OpenAI Primer

In an in-depth conversation with TV and film icon Oprah Winfrey on Thursday, OpenAI CEO Sam Altman shared his thoughts on artificial intelligence and what the future holds for the relationship between humans and computers.

In one of several interviews for Winfrey’s prime-time special “AI and the Future of Us,” Altman shed light on the transformative potential of this technology, as well as the critical challenges that developers and policymakers must address.

“Four years ago, most of the world, if they found out about AI, thought about self-driving cars or some other thing,” he told Winfrey. “It was only in 2022 when, for the first time, people said, ‘Okay, this ChatGPT thing, this computer talks to me, now that’s new.’ And then since then, if you look at how much better it’s gotten, it’s been a pretty steep rate of improvement.”

Altman called AI the “next chapter of computing,” which allows computers to understand, predict, and interact with their human operators.

“We have figured out how to make computers smarter, to understand more, to be able to be more intuitive and more useful,” he said.

When asked to describe how ChatGPT works, Altman went back to basics, saying the core of ChatGPT’s capabilities lies in its ability to predict the next word in a sequence, a skill honed by training on large amounts of text data.

“The most basic level, we are showing the system 1,000 words in a sequence and asking it to predict what word comes next, and doing that again and again and again,” he explained, comparing it to when a smartphone attempts to predict the next word in a text message. “The system learns to predict, and then in there, it learns the underlying concepts.”
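To make that concrete, here is a toy Python sketch of next-word prediction using simple bigram counts. Real systems like ChatGPT learn the prediction with a neural network over tokens rather than a lookup table, but the repeat-the-prediction training loop Altman describes is the same idea.

    # Toy next-word predictor: count which word follows which in a tiny
    # corpus, then return the most frequent continuation. Large language
    # models learn this mapping with neural networks instead of counts.
    from collections import Counter, defaultdict

    corpus = ("it is a truth universally acknowledged that a single man "
              "in possession of a good fortune must be in want of a wife").split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the continuation seen most often after `word`."""
        candidates = follows[word]
        return candidates.most_common(1)[0][0] if candidates else "<unknown>"

    print(predict_next("in"))  # -> "possession" (first of the tied continuations)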

During the segment, Winfrey noted that a lack of trust led to a major shakeup at OpenAI in 2023. In November of that year, Altman was abruptly fired as CEO of OpenAI, with the board citing a lack of trust in Altman’s leadership—although he was reinstated a week later.

“So the bar on this is clearly extremely high—the best thing that we can do is to put this technology in the hands of people,” Altman said. “Talk about what it is capable of, what it’s not, what we think is going to come, what we think might come, and give our best advice about how society should decide to use [AI].”

“We think it’s important to not release something which we also might get wrong and build up that trust over time, but it is clear that this is going to be a very impactful technology, and I think a lot of scrutiny is thus super warranted,” he added.

One of the concerns raised during the interview was the need for diversity in the AI industry, with Winfrey pointing out that predominantly white males currently dominate the field.

“Obviously, we want everybody to see themselves in our products,” Altman said. “We also want the industry workforce to be much more diverse than it is, and there’s slower-than-we’d-like progress, but there is progress there,” he said, expressing a commitment to ensuring that the benefits of AI are accessible to all.

Altman also highlighted OpenAI’s collaboration with policymakers in developing safer artificial intelligence, saying that he speaks with members of the U.S. government—from the White House to Congress—multiple times a week.

Last month, OpenAI and Anthropic announced a formal collaboration with the U.S. AI Safety Institute (AISI). Under the agreement, the institute will have access to each company’s major new models, ChatGPT and Claude respectively, both before and after their public release.

Altman said collaboration between AI developers and policymakers is crucial, as is safety testing of AI models.

“A partnership between the companies developing this technology and governance is really important; one of the first things to do, and this is now happening, is to get the governments to start figuring out how to do safety testing on these systems—like we do for aircraft or new medicines or things like that,” Altman said. “And then I think from there, if we can get good at that now, we’ll have an easier time figuring out exactly what the regulatory framework is later.”

When Winfrey told Altman that he’s been called the most powerful and perhaps most dangerous man on the planet, the CEO pushed back.

“I don’t feel like the most powerful person or anything even close to that,” he said. “I feel the opportunity—responsibility in a positive way—to get to nudge this in a direction that I think can be really good for people.”

“That is a serious, exciting, somewhat nerve-wracking thing, but it’s something that I feel very deeply about, and I realize I will never get to touch anything this important again,” Altman added.

Edited by Ryan Ozawa.


OpenAI Launches New ‘o1’ Model That Outperforms GPT-4o

OpenAI has introduced a new family of models and made them available Thursday on its paid ChatGPT Plus subscription tier, claiming that they provide major improvements in performance and reasoning capabilities.

“We are introducing OpenAI o1, a new large language model trained with reinforcement learning to perform complex reasoning,” OpenAI said in an official blog post. “o1 thinks before it answers.” For weeks, AI industry watchers had expected the top AI developer to deploy a new “strawberry” model, although distinctions between the different models under development were not publicly disclosed.

OpenAI describes this new family of models as a big leap forward, so much so that it changed its usual naming scheme, breaking from the GPT-3, GPT-3.5, and GPT-4o series.

“For complex reasoning tasks, this is a significant advancement and represents a new level of AI capability,” OpenAI said. “Given this, we are resetting the counter back to one and naming this series OpenAI o1.”

Key to the operation of these new models is that they “take their time” to think before acting, the company noted, and use “chain-of-thought” reasoning to make them extremely effective at complex tasks.

Notably, even the smallest model in this new lineup surpasses the top-tier GPT-4o in several key areas, according to benchmarks shared by OpenAI, particularly on challenges considered to have PhD-level complexity.

The newly released models emphasize what OpenAI calls “deliberative reasoning,” where the system takes additional time to work internally through its responses. This process aims to produce more thoughtful, coherent answers, particularly in reasoning-heavy tasks.

OpenAI also published internal testing results showing improvements over GPT-4o in such tasks as coding, calculus, and data analysis. However, the company disclosed that OpenAI o1 showed less drastic improvement in tasks like creative writing. (Our own subjective tests placed OpenAI offerings behind Claude AI in these areas.) Nonetheless, the new model’s results were rated well overall by human evaluators.

As noted, the new model implements chain-of-thought reasoning during inference. In short, the model reasons through a problem step by step in a segmented fashion before producing a final result, which is what users ultimately see.

“The o1 model series is trained with large-scale reinforcement learning to reason using chain of thought,” OpenAI says in the o1 family’s system card. “Training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits—while also increasing potential risks that stem from heightened intelligence.”
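OpenAI has not documented exactly how o1 structures that hidden reasoning, so the following Python sketch is only an assumption-laden illustration of the general pattern: ask the model to reason inside one delimited section and answer in another, then show the user only the answer. The tags and the two-section format are hypothetical, not OpenAI’s actual implementation.

    # Hypothetical illustration of segmented chain-of-thought inference.
    # The <reasoning>/<answer> delimiters are assumptions for this sketch;
    # o1's real mechanism is not publicly documented.

    PROMPT_TEMPLATE = (
        "Reason step by step inside <reasoning> tags, then give only the "
        "final answer inside <answer> tags.\n\nProblem: {problem}"
    )

    def split_sections(model_output: str) -> tuple[str, str]:
        """Separate hidden reasoning from the user-visible answer."""
        def between(tag: str) -> str:
            open_t, close_t = f"<{tag}>", f"</{tag}>"
            start = model_output.find(open_t) + len(open_t)
            return model_output[start:model_output.find(close_t)].strip()
        return between("reasoning"), between("answer")

    # Stand-in for a real model call; a live system would send
    # PROMPT_TEMPLATE.format(problem=...) to the model and parse the reply.
    output = "<reasoning>27 * 4 = 108; 108 - 8 = 100.</reasoning><answer>100</answer>"
    hidden, visible = split_sections(output)
    print(visible)  # the user sees only: 100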

The broad assertion leaves room for debate among technical observers about the true novelty of the model’s architecture. OpenAI has not clarified how the process diverges from standard token-based generation: is it a genuine allocation of compute to reasoning, a hidden chain-of-thought prompt, or a mixture of both techniques?

A previous open-source AI model called Reflection had experimented with a similar reasoning-heavy approach but faced criticism for its lack of transparency. That model used tags to separate the steps of its reasoning, leading to what its developers said was an improvement over the outputs from conventional models.

Embedding more guidelines into the chain-of-thought process not only makes the model more accurate but also less prone to jailbreaking techniques, as it has more time—and steps—to catch when a potentially harmful result is being produced.

The jailbreaking community seems to be as efficient as ever in finding ways to bypass AI safety controls, with the first successful jailbreaks of OpenAI o1 reported minutes after its release.

It remains unclear whether this deliberative reasoning approach can be effectively scaled for real-time applications requiring fast response times. Meanwhile, OpenAI said it intends to expand the models’ capabilities, including web search functionality and improved multimodal interactions.

The model will also be tweaked over time to meet OpenAI’s minimum standards in terms of safety, jailbreak prevention, and autonomy.

The model was set to roll out today; however, it may be released in phases, as some users have reported that it is not yet available to them for testing.

The smallest version will eventually be available for free, and its API access will be 80% cheaper than OpenAI o1-preview, according to OpenAI’s announcement. But don’t get too excited: there’s currently a limit of only 30 messages per week for o1-preview and 50 for o1-mini, so pick your prompts wisely.


AI, Robotics Advances Will Bring ‘Crisis of Meaning’, Elon Musk Says

Published

on



Tesla CEO Elon Musk said he sees a future where humans will have to figure out new ways to give their lives purpose, thanks to advancements in artificial intelligence and robotics.

During a panel discussion at the 2024 All-In Summit hosted by the All-In Podcast on Monday, Musk echoed comments he made during a fireside chat at the AI Safety Summit in Bletchley Park in England last year. He said he foresees a future where a human workforce is a thing of the past.

“I think the actual issue, the most likely issue, is how do we find meaning in a world where AI can do everything we can do, but better? That is perhaps the bigger challenge,” Musk said.

He noted that the Turing test, once a benchmark to measure how convincingly a computer could emulate human conversation, is no longer relevant.

“Although at this point, I know more and more people who are retired, and they seem to enjoy that life, but I think that maybe there’ll be some crisis of meaning because the computer can do everything you can do, but better,” he reiterated.

This shift, Musk said, would be aided by the development of autonomous cars and humanoid robots, which could infinitely increase economic growth and productivity.

“If you’ve got humanoid robots—when there’s no real limit on the number of humanoid robots—and they can operate very intelligently, then there’s no actual limit to the economy in it,” he said. “There’s no meaningful limit to the economy.”

Musk said that through the development of the Optimus robot at Tesla, the company has learned a lot about how the human body works and why it is shaped the way it is. He pointed to the shape of the fingers and thumb, which will guide future developments of Optimus.

“The current version of the Optimus hand has the actuators in the hand and has only 11 degrees of freedom, so it doesn’t have all the degrees of freedom of the human hand, which has—depending on how you count—roughly 25 degrees of freedom,” he said, adding that this limits how strong the robot can be.

“The next generation Optimus hand, which we have in prototype form, the actuators have moved to the forearm, just like a human, and they operate the fingers through cables, just like the human hand,” Musk said. “The next generation hand has 22 degrees of freedom, which we think is enough to do almost anything that a human can do.”

Musk and Tesla are racing rival developers, including Figure AI, OpenAI-backed 1X, Meta, Nvidia, and German car manufacturer Mercedes-Benz, to build and bring large-scale humanoid robots to market.

Because of the push to develop AI-powered humanoid robots for home and business, Musk said he foresees them outnumbering humans two-to-one someday.

“I think the number of robots will vastly exceed the number of humans,” he said. “You have to say, ‘Who would not want their robot buddy?’ Everyone wants a robot buddy.”
