
artificial intelligence

Need Gift-Buying Advice for That Special Someone? Our AI SantaBot is Here to Help



If you’ve been riding the wave of Bitcoin’s latest surge or watching your crypto wallet grow, the holidays might feel like the perfect time to turn those gains into memorable gifts.

But as any seasoned gifter knows, a big price tag doesn’t always guarantee a big impression.

That’s where SantaBot comes in—our AI agent experiment designed to help you step up your gift-giving game.

The project started with a simple question: Could AI decode personal chat messages to suggest more meaningful presents to your friends and loved ones? As it turns out, it can.

Designed to take the guesswork out of giving, SantaBot digs through your conversation histories to build detailed personality profiles of potential gift recipients, gathering insights about their habits, preferences, and more.

It even maps out Myers-Briggs and Enneagram personality types based on how people talk to each other.

The tool provides creative personalization ideas to make each gift truly unique.

For instance, you could gift your son a baseball bat engraved with his uniform number and initials or surprise your wife with a smartwatch featuring straps in her favorite color. These small, thoughtful touches can make a big impact.

Hands-On

Before giving my bot to the world, I decided to give it a try. I uploaded my WhatsApp conversation with my wife and asked it to tell me the best gifts I could give her.

SantaBot psychoanalyzed my wife’s WhatsApp messages, and folks, we’ve got ourselves a certified Type 2 personality (Helper, Giver) with a major love for desserts and fitness.

According to the bot, she’s what personality experts call an ESFJ (Extraverted, Sensing, Feeling, Judging), basically someone who’d organize a group hug and then make sure everyone filled out a satisfaction survey afterward.

The personality analysis shows she ranks high in agreeableness and extraversion, moderate in neuroticism (their words, not mine; I value my life), and has a practical streak that somehow doesn’t apply to transportation choices.

The bot didn’t stop at basic personality traits. It went full CIA analyst on our conversations, noting everything from her use of “Te quiero mucho mi flaquito” (translation: “I love you, my skinny one”) to her appreciation for little details over luxurious things.

SantaBot even picked up on her Uber addiction faster than our credit card company.

It painted a picture of someone who’s health-conscious but won’t walk two blocks if there’s a car service available—which is not 100% accurate, but is easy to infer if all you know about her is our conversation history.

Now, on to the gift suggestions. These were some of the most exciting picks:

For the practical side:

  • A Miniature Chocolate Fountain with a customized base that says “Edgli’s [her nickname] Sweet Spot” (she had shown interest in buying one for future events).
  • An “Uber Survival Kit” with a prepaid card (cheaper than buying her an actual car) or a mug with “Boss of Uber Requests” printed on it.
  • A literal vault for her chocolate stash with “Keep Out, Unless You’re Amorsito” engraved on it—so I stay away from it.

For the fancy-pants moments:

  • A custom box with desserts from Venezuela and Brazil.
  • A spa kit named “Aromas de Edgli” (much fancier than “Smell Like My Wife”).
  • A leather planner embossed with “Amorsito’s Plans.”
  • A Star Map Print showcasing the constellations of a meaningful date, like the day we met or the day our daughter was born.

And for when money is no object:

  • A smartwatch to help her keep track of her fitness activity and burn calories.
  • A designer handbag with her initials embossed.
  • A weekend getaway featuring a chocolate-tasting experience in Gramado (basically a dessert safari in one of Brazil’s top tourist destinations).

It also recommended some funny gift ideas, including a “Drama Queen Survival Kit” (which she would hate), a “Custom Emoji Pillow” (which she would love), and a personalized apron with a nickname like “Chef Sass Master.”

I compared SantaBot head-to-head against regular ChatGPT to see how it stacked up.

The difference was clear—while standard ChatGPT played it safe with generic suggestions, our specialized version picked up on subtle hints.

Its suggestions weren’t useless, just less personal.

How to Get Santa Bot’s Help

To use our tool, upload your conversation history and ask the model for recommendations.

You can then follow up, asking for more suggestions or personalization ideas, or providing more context. The more information the AI has to work with, the better the results should be.

Good starting prompts range from the simple (“Please carefully analyze this conversation and tell me what presents she/he would like”) to the more specific (“What are the best presents I could give to a person with an ENFP personality type?”).

You can also iterate with the tool. Once it replies, ask for more suggestions, funnier recommendations, more romantic ideas, and so on. It all depends on your intentions and expectations.

Exporting chats is pretty straightforward, depending on which messaging app you use.

WhatsApp users can export chats from the app, though iMessage folks need to use tools like iMazing to get their conversation data. Similar options exist for Telegram, Facebook, Instagram, and TikTok users. Just google them.

Also, upload only text conversations: export your data without photos, voice notes, or documents.
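Even a text-only export may still contain placeholder lines where media used to be. Here’s a minimal Python sketch for stripping those out before uploading. The marker strings are assumptions—the exact wording varies by platform and language (“<Media omitted>”, “image omitted”, and so on)—so extend the tuple to match your export.

```python
# Remove media placeholder lines from an exported WhatsApp-style chat.
# MEDIA_MARKERS is an assumption: adjust it to match the placeholders
# your own export actually uses.

MEDIA_MARKERS = ("<Media omitted>", "image omitted", "audio omitted",
                 "video omitted", "document omitted", "sticker omitted")

def strip_media_lines(lines):
    """Return only the lines that carry actual text, not media stubs."""
    return [line for line in lines
            if not any(marker in line for marker in MEDIA_MARKERS)]

sample = [
    "12/24/24, 9:15 PM - Edgli: Te quiero mucho mi flaquito",
    "12/24/24, 9:16 PM - Edgli: <Media omitted>",
    "12/24/24, 9:17 PM - Me: I love you too!",
]
print(strip_media_lines(sample))
```

To clean a real export, read the TXT file into a list of lines, run it through `strip_media_lines`, and save the result to a new file before uploading.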

This, of course, means there are privacy concerns that you should address. SantaBot requires access to those conversations to create its detailed profiles.

Sharing such personal data without permission could be unethical. The fix isn’t perfect, but it works: Ask the other person for permission to use the conversation for an AI experiment. If they agree, you’re good to go.

If you don’t want to go that route, you can take other steps.

First, anonymize names in the exported chat by replacing them with placeholders. Open the TXT file, use your editor’s find-and-replace feature (the exact steps vary by text editor), and swap each name for a placeholder in every instance. Save the file and upload that version to ChatGPT.
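For long chats where manual find-and-replace gets tedious, the same step can be scripted. This is a minimal sketch; the name-to-placeholder mapping is an assumption you’d fill in with the actual names from your export.

```python
# Replace real names with placeholders in a chat transcript.
# PLACEHOLDERS is a hypothetical mapping: substitute the names that
# actually appear in your exported conversation.

PLACEHOLDERS = {
    "Edgli": "PERSON_A",
    "Sebastian": "PERSON_B",
}

def anonymize(text, mapping):
    """Replace every occurrence of each name with its placeholder."""
    for name, placeholder in mapping.items():
        text = text.replace(name, placeholder)
    return text

sample = "9:15 PM - Edgli: Dinner at 8?"
print(anonymize(sample, PLACEHOLDERS))  # 9:15 PM - PERSON_A: Dinner at 8?
```

Apply it to the whole export by reading the TXT file into a string, running `anonymize`, and writing the result to a new file.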

Second, ensure OpenAI cannot use the chat to train its models. Start by adjusting your ChatGPT settings to disable memories: click your profile picture in the top right corner of ChatGPT, go to Settings > Personalization, and turn off “Memory.”

Alternatively, you can click on “Manage” after your conversation is done and delete any memory that could be created mentioning your latest chat.

Additionally, you can prevent OpenAI from training its models on your conversations by opting out of data sharing, which is enabled by default.

To change that, go to Settings, click Data controls, and turn off “Improve the model for everyone.” The name sounds nice, but in plain language it means “Let OpenAI use your conversations to train its models for free, and probably charge you more once those models get more powerful.”

Overall, building GPTs and specialized agents can bring practical solutions to everyday challenges, like the art of gifting.

Our AI may surprise you with clever ideas that turn ordinary presents into unforgettable gestures, so you can be as successful at family reunions as you think you are at trading crypto.

At the very least, when the presents miss the mark, you’ll have something better to blame than your lack of creativity.

Edited by Sebastian Sinclair

Generally Intelligent Newsletter

A weekly AI journey narrated by Gen, a generative AI model.







AI Crypto Startup O.XYZ Faces Allegations of Misrepresentation and Internal Turmoil: Sources




O.XYZ, a blockchain and AI company touting crypto and artificial intelligence services, is facing allegations of falsely inflating its technological claims and engaging in aggressive tactics to suppress dissent within the company.

While founder Ahmad Shadid has defended both his and the company’s actions, multiple sources familiar with the company’s operations who spoke with Decrypt have disputed its public claims, alleging widespread misrepresentation of O.XYZ’s capabilities.

O.XYZ positions itself as a community-owned “Super AI” ecosystem. The company claims to leverage substantial GPU computing power, purportedly deploying tens of thousands of open-source models, enabling it to execute a wide array of tasks.

Sources claim the company has exaggerated its capabilities, falsely stating it can connect to over 100,000 AI models, runs 20 times faster than competitors, and owns powerful hardware it doesn’t actually possess. 

It’s also accused of inflating the value of its satellite program and misrepresenting its token launch, raising questions about transparency and accountability.

As a result of those allegations, sources claim holders of the company’s recently launched O.XYZ token are at risk of being harmed.

In an emailed statement to Decrypt, Shadid issued a detailed response to concerns raised about the company’s claims, insisting that O.XYZ’s promotional language is “forward-looking” and aligned with its development roadmap. 

However, sources who spoke with Decrypt dispute this characterization, pointing to materials on O.XYZ’s website and investor presentations that describe capabilities as existing rather than aspirational.

In June, Shadid stepped down as CEO of Solana-based decentralized infrastructure provider IO.net—a company he founded—amid allegations surrounding his past and misreported company metrics, describing the decision as a move to reduce distractions and focus on the company’s growth.

A public statement Shadid published amid his departure from IO has since been deleted from Twitter (aka X). To avoid conflicts and distance itself from Shadid, IO agreed to offer a “six-figure severance,” one source familiar with the matter told Decrypt. IO earlier this year raised $30 million in a Series A round from notable crypto industry investors, including Hack VC, Solana Labs, Aptos Labs, Multicoin Capital, and Animoca Brands. 

Several sources who have previously worked with Shadid described him as a “smart, capable individual” who consistently manages to assemble a highly experienced team for the job. However, both a former employee and an investor who wished not to be named stated they would “never work with Shadid again.”

Disputed infrastructure and performance claims

In response to allegations that O.XYZ is exaggerating its capabilities, Shadid highlighted the company’s investments in U.S.-based Cerebras Systems hardware and plans to deploy cutting-edge AI data centers, asserting that its infrastructure supports “20x faster” AI processing. He cited benchmarks of Cerebras WSE-3 chips as evidence of O.XYZ’s performance leap.

Sources dismissed those claims as “patently false,” instead alleging O.XYZ has yet to acquire the necessary hardware for such operations, despite Shadid’s claims of “advanced talks” with Cerebras.

“There’s no internal benchmarking supporting the 20x figure,” one source said, who noted that the company’s routing technology might actually increase latency rather than reduce it.

O.XYZ has also promoted itself as being powered by SpaceX’s Starlink, with Shadid emphasizing the technology’s integration within the company’s operations. 

He further clarified that the claim refers to O.XYZ’s ongoing infrastructure roadmap, including plans for “maritime connectivity solutions” and future AI capabilities in space slated for 2026.

However, sources strongly contest that narrative. Instead, they assert Starlink is only used for basic internet connectivity in remote areas and plays no role in AI processing. 

“No satellite designs exist within the company, and there’s no engineering team capable of developing such capabilities,” one source told Decrypt. They added that there are no ongoing discussions with SpaceX, despite the impression created in marketing materials.

Shadid’s responses also addressed the display of logos from major organizations such as OpenAI and Neuralink, claiming they were used to represent contributors’ backgrounds rather than formal partnerships. 

However, sources allege that this practice misleads investors and customers, noting that contributors requested their logos be removed after leaving the company—a request that allegedly has yet to be resolved.

Controversy around token launch

The company’s O.XYZ token launch on October 15 across multiple “lesser-known” exchanges has been another flashpoint. While the token averages only around $23,000 in daily trading volume across all exchanges, with a fully diluted valuation of just $8.1 million, sources say it’s only a matter of time before token holders are harmed.

“There is no way to use the token to pay for anything like API calls for the company AI, nor does the token legally entitle the holder to any assets of the company,” one of the sources said.

Shadid characterized the “initial liquidity pool activation” as occurring during a “testing phase,” which was “immediately communicated to the community.”

“After a thorough market condition analysis, we made a strategic decision to proceed with the launch rather than withdraw the liquidity, effectively advancing our planned token release timeline,” Shadid said.

He added: “This decision was communicated transparently through multiple channels, including Discord and internal communications,” he said. “While the initial activation was unplanned, our subsequent decision to maintain the token’s availability was deliberate and strategic. We maintain comprehensive documentation of all communications throughout this process, demonstrating our commitment to transparency with both our community and stakeholders.”

One former employee who did not wish to be named, for fear of reprisal, shared that they were offered financial incentives tied to a non-disclosure agreement after questioning the ethical implications of the launch. 

Another source alleged, “Shadid was testing trading algorithms when the ‘accident’ occurred.”

“Was testing my O.CAPITAL market maker quant systems, and it created a pool on Uniswap, and tokens went live by mistake,” according to a screenshot reviewed by Decrypt of a message from Shadid posted to a general Slack channel for all employees to see. “I can’t take it down.”

Secret recordings also reviewed by Decrypt appear to contradict Shadid’s explanation. Sources say the token launch was instead deliberate, and employees were told differing stories—some that it was intentional, others that it was a “mistake.”

“Totally against what the public-facing company docs would have people believe with lines of transparency and community ownership,” one source said. “Ahmad owns all the tokens effectively and can dump them at a whim.”

Allegations of retaliatory practices

Sources claim that O.XYZ has used non-disclosure agreements to suppress dissent. They described a culture of retaliation, including terminations following inquiries into the company’s operations. 

“The NDAs are being weaponized to silence legitimate concerns,” one source alleged.

Shadid defended the company’s contractor-based employment model and strict confidentiality agreements, stating these practices are standard in the industry. 

Shadid has not directly addressed the allegations of retaliation, but emphasized O.XYZ’s commitment to “clear, accurate communication” and “comprehensive documentation” of its strategic goals.

In any case, the allegations have led several former employees and contributors to seek legal counsel. Sources Decrypt spoke to say those former employees are now exploring further options to shed light on O.XYZ’s alleged practices.







Decentralized AI Project Morpheus Goes Live on Mainnet




Morpheus went live on a public testnet, or simulated experimental environment, in July. The project promises personal AIs, also known as “smart agents,” that can empower individuals much like personal computers and search engines did in decades past. Among other tasks, agents can “execute smart contracts, connecting to users’ Web3 wallets, DApps, and smart contracts,” the team said.






How the US Military Says Its Billion Dollar AI Gamble Will Pay Off




War is more profitable than peace, and AI developers are eager to capitalize by offering the U.S. Department of Defense various generative AI tools for the battlefields of the future.

The latest evidence of this trend came last week when Claude AI developer Anthropic announced that it was partnering with military contractor Palantir and Amazon Web Services (AWS) to provide U.S. intelligence and the Pentagon access to Claude 3 and 3.5.

Anthropic said Claude will give U.S. defense and intelligence agencies powerful tools for rapid data processing and analysis, allowing the military to perform faster operations.

Experts say these partnerships allow the Department of Defense to quickly adopt advanced AI technologies without needing to develop them internally.

“As with many other technologies, the commercial marketplace always moves faster and integrates more rapidly than the government can,” retired U.S. Navy Rear Admiral Chris Becker told Decrypt in an interview. “If you look at how SpaceX went from an idea to implementing a launch and recovery of a booster at sea, the government might still be considering initial design reviews in that same period.”

Becker, a former Commander of the Naval Information Warfare Systems Command, noted that integrating advanced technology initially designed for government and military purposes into public use is nothing new.

“The internet began as a defense research initiative before becoming available to the public, where it’s now a basic expectation,” Becker said.

Anthropic is only the latest AI developer to offer its technology to the U.S. government.

Following the Biden Administration’s memorandum in October on advancing U.S. leadership in AI, ChatGPT developer OpenAI expressed support for U.S. and allied efforts to develop AI aligned with “democratic values.” More recently, Meta also announced it would make its open-source Llama AI available to the Department of Defense and other U.S. agencies to support national security.

During Axios’ Future of Defense event in July, retired Army General Mark Milley noted advances in artificial intelligence and robotics will likely make AI-powered robots a larger part of future military operations.

“Ten to fifteen years from now, my guess is a third, maybe 25% to a third of the U.S. military will be robotic,” Milley said.

In anticipation of AI’s pivotal role in future conflicts, the DoD’s 2025 budget requests $143.2 billion for Research, Development, Test, and Evaluation, including $1.8 billion specifically allocated to AI and machine learning projects.

Protecting the U.S. and its allies is a priority. Still, Dr. Benjamin Harvey, CEO of AI Squared, noted that government partnerships also provide AI companies with stable revenue, early problem-solving, and a role in shaping future regulations.

“AI developers want to leverage federal government use cases as learning opportunities to understand real-world challenges unique to this sector,” Harvey told Decrypt. “This experience gives them an edge in anticipating issues that might emerge in the private sector over the next five to 10 years.”

He continued: “It also positions them to proactively shape governance, compliance policies, and procedures, helping them stay ahead of the curve in policy development and regulatory alignment.”

Harvey, who previously served as chief of operations data science for the U.S. National Security Agency, also said another reason developers look to make deals with government entities is to establish themselves as essential to the government’s growing AI needs.

With billions of dollars earmarked for AI and machine learning, the Pentagon is investing heavily in advancing America’s military capabilities, aiming to use the rapid development of AI technologies to its advantage.

While the public may envision AI’s role in the military as involving autonomous, weaponized robots advancing across futuristic battlefields, experts say that the reality is far less dramatic and more focused on data.

“In the military context, we’re mostly seeing highly advanced autonomy and elements of classical machine learning, where machines aid in decision-making, but this does not typically involve decisions to release weapons,” Kratos Defense President of Unmanned Systems Division, Steve Finley, told Decrypt. “AI substantially accelerates data collection and analysis to form decisions and conclusions.”

Founded in 1994, San Diego-based Kratos Defense has partnered extensively with the U.S. military, particularly the Air Force and Marines, to develop advanced unmanned systems like the Valkyrie fighter jet. According to Finley, keeping humans in the decision-making loop is critical to preventing the feared “Terminator” scenario from taking place.

“If a weapon is involved or a maneuver risks human life, a human decision-maker is always in the loop,” Finley said. “There’s always a safeguard—a ‘stop’ or ‘hold’—for any weapon release or critical maneuver.”

Despite how far generative AI has come since the launch of ChatGPT, experts, including author and scientist Gary Marcus, say current limitations of AI models put the real effectiveness of the technology in doubt.

“Businesses have found that large language models are not particularly reliable,” Marcus told Decrypt. “They hallucinate, make boneheaded mistakes, and that limits their real applicability. You would not want something that hallucinates to be plotting your military strategy.”

Known for critiquing overhyped AI claims, Marcus is a cognitive scientist, AI researcher, and author of six books on artificial intelligence. Regarding the dreaded “Terminator” scenario, and echoing Kratos Defense’s executive, Marcus also emphasized that fully autonomous robots powered by AI would be a mistake.

“It would be stupid to hook them up for warfare without humans in the loop, especially considering their current clear lack of reliability,” Marcus said. “It concerns me that many people have been seduced by these kinds of AI systems and not come to grips with the reality of their reliability.”

As Marcus explained, many in the AI field hold the belief that simply feeding AI systems more data and computational power would continually enhance their capabilities—a notion he described as a “fantasy.”

“In the last weeks, there have been rumors from multiple companies that the so-called scaling laws have run out, and there’s a period of diminishing returns,” Marcus added. “So I don’t think the military should realistically expect that all these problems are going to be solved. These systems probably aren’t going to be reliable, and you don’t want to be using unreliable systems in war.”

Edited by Josh Quittner and Sebastian Sinclair




