Need Gift-Buying Advice for That Special Someone? Our AI SantaBot is Here to Help
Published 3 months ago
If you’ve been riding the wave of Bitcoin’s latest surge or watching your crypto wallet grow, the holidays might feel like the perfect time to turn those gains into memorable gifts.
But as any seasoned gifter knows, a big price tag doesn’t always guarantee a big impression.
That’s where SantaBot comes in—our AI agent experiment designed to help you step up your gift-giving game.
The project started with a simple question: Could AI decode personal chat messages to suggest more meaningful presents for your friends and loved ones? As it turns out, it can.
Designed to take the guesswork out of giving, SantaBot digs through conversation histories to build detailed personality profiles, gathering insights about a person’s habits, preferences, and quirks.
It even maps out Myers-Briggs and Enneagram personality types based on how people talk to each other.
The tool provides creative personalization ideas to make each gift truly unique.
For instance, you could gift your son a baseball bat engraved with his uniform number and initials or surprise your wife with a smartwatch featuring straps in her favorite color. These small, thoughtful touches can make a big impact.
Hands-On
Before giving my bot to the world, I decided to give it a try. I uploaded my WhatsApp conversation with my wife and asked it to tell me the best gifts I could give her.
SantaBot psychoanalyzed my wife’s WhatsApp messages, and folks, we’ve got ourselves a certified Type 2 personality (Helper, Giver) with a major love for desserts and fitness.
According to the bot, she’s what personality experts call an ESFJ (Extraverted, Sensing, Feeling, Judging), basically someone who’d organize a group hug and then make sure everyone filled out a satisfaction survey afterward.

The personality analysis shows she ranks high in agreeableness and extraversion, moderate in neuroticism (their words, not mine—I value my life), and has a practical streak that somehow doesn’t apply to transportation choices.
The bot didn’t just stop at basic personality traits. It went full CIA analyst on our conversations, noting everything from her use of “Te quiero mucho mi flaquito” (translation: “I love you, my skinny one”) to her preference for little details over luxurious things.
SantaBot even picked up on her Uber addiction faster than our credit card company.
It painted a picture of someone who’s health-conscious but won’t walk two blocks if there’s a car service available—which isn’t 100% accurate, but is an easy inference if the only thing you know about her is our conversation history.
Now, on to the gift suggestions. These were some of the most exciting picks:
For the practical side:
- A Miniature Chocolate Fountain with a customized base that says “Edgli’s [her nickname] Sweet Spot.” (considering she showed interest in buying one for future events)
- An “Uber Survival Kit” with a prepaid card (cheaper than buying her an actual car) or a mug with “Boss of Uber Requests” printed on it.
- A literal vault for her chocolate stash with “Keep Out, Unless You’re Amorsito” engraved on it—so I stay away from it.
For the fancy pants moments:
- A custom box with desserts from Venezuela and Brazil.
- A spa kit named “Aromas de Edgli” (much fancier than “Smell Like My Wife”).
- A leather planner embossed with “Amorsito’s Plans.”
- A Star Map Print showcasing the constellations of a meaningful date, like the day we met or the day our daughter was born.
And for when money is no object:
- A smartwatch to help her keep track of her fitness activity and burn calories.
- A designer handbag with her initials embossed.
- A weekend getaway featuring a chocolate-tasting experience in Gramado (basically a dessert safari in one of Brazil’s top tourist destinations).

It also recommended some funny gift ideas, including a “Drama Queen Survival Kit” (which she would hate), a “Custom Emoji Pillow” (which she would love), and a personalized apron with a nickname like “Chef Sass Master.”
I compared SantaBot head-to-head against regular ChatGPT to see how it stacked up.
The difference was clear—while standard ChatGPT played it safe with generic suggestions, our specialized version picked up on subtle hints.
It’s not that ChatGPT’s suggestions were useless; they were just less personal.

How to Get SantaBot’s Help
To use our tool, upload your conversation history and ask the model for recommendations.
You can then follow up with questions, asking for more suggestions or personalization ideas, or providing additional context. The more information the AI has to work with, the better the results should be.
Good starting prompts range from something as simple as “Please carefully analyze this conversation and tell me what presents she/he would like” to something as specific as “What are the best presents I could give to a person with an ENFP personality type?”
You can also iterate: once the tool replies, ask for more suggestions, funnier recommendations, more romantic gift ideas, and so on. It all depends on your intentions and expectations.
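If you prefer working outside the ChatGPT interface, the same workflow can be scripted. Below is a minimal sketch using the official OpenAI Python SDK; the model name, prompt wording, and file path are illustrative assumptions, not part of SantaBot itself.

```python
# Minimal sketch: feed an exported chat to a chat model and ask for gift ideas.
# Model name, prompt wording, and file path are assumptions, not SantaBot's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("whatsapp_export.txt", encoding="utf-8") as f:
    chat_history = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; use whichever you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You analyze a personal chat history, infer the recipient's "
                "interests and personality, and suggest thoughtful, personalized gifts."
            ),
        },
        {
            "role": "user",
            "content": (
                "Please carefully analyze this conversation and tell me what "
                f"presents this person would like:\n\n{chat_history[:50000]}"  # crude length cap
            ),
        },
    ],
)

print(response.choices[0].message.content)
```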

Exporting chats is pretty straightforward, depending on which messaging app you use.
WhatsApp users can export chats from the app, though iMessage folks need to use tools like iMazing to get their conversation data. Similar options exist for Telegram, Facebook, Instagram, and TikTok users. Just google them.
Also, ensure you only upload text conversations, so export your data without photos, voice notes, or documents.
This, of course, means there are privacy concerns that you should address. SantaBot requires access to those conversations to create its detailed profiles.
Sharing such personal data without permission could be unethical. The fix isn’t perfect, but it works: Ask the other person for permission to use the conversation for an AI experiment. If they agree, you’re good to go.
If you don’t want to go that route, you can take other steps.
First, anonymize names in the exported chat by replacing them with placeholders. Open the TXT file, use your text editor’s find-and-replace function (the exact steps vary by editor), and swap every instance of each name for a placeholder. Save that file and upload it to ChatGPT.
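If you’d rather script the cleanup than do the find-and-replace by hand, here is a minimal sketch in Python; the names and file paths are placeholders you’d swap for your own.

```python
# Minimal sketch: replace real names in an exported chat with placeholders.
# The names and file paths below are placeholders; swap in your own.
from pathlib import Path

replacements = {
    "Alice": "PERSON_A",  # your own name as it appears in the export
    "Bob": "PERSON_B",    # the other person's name or nickname
}

text = Path("whatsapp_export.txt").read_text(encoding="utf-8")
for real_name, placeholder in replacements.items():
    text = text.replace(real_name, placeholder)

Path("whatsapp_export_anonymized.txt").write_text(text, encoding="utf-8")
```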
Second, ensure OpenAI cannot use that chat to train its models. The first thing you can do is adjust your ChatGPT settings to disable memories: click your profile picture in the top-right corner of ChatGPT, go to Settings, open Personalization, and turn off “Memory.”

Alternatively, once your conversation is done, you can click “Manage” and delete any memory created from your latest chat.
Additionally, you can prevent OpenAI from training its models on your conversations, which it is allowed to do by default.
To change that, go to Settings, click Data controls, and turn off “Improve the model for everyone.” The name sounds nice, but in non-corpo language it translates to “Let OpenAI use your conversations to train its models for free and probably charge you more once they get more powerful.”

Overall, building GPTs and specialized agents can bring practical solutions to everyday challenges, like the art of gifting.
Our AI may surprise you with clever ideas that turn ordinary presents into unforgettable gestures, so you can be as successful at your family reunions as you think you are at trading crypto.
At the very least, when the presents miss the mark, you’ll have something better to blame than your lack of creativity.
Edited by Sebastian Sinclair
Technique Behind ChatGPT’s AI Wins Computing’s Top Prize—But Its Creators Are Worried
Published March 6, 2025

Andrew Barto and Richard Sutton, who received computing’s highest honor this week for their foundational work on reinforcement learning, didn’t waste any time using their new platform to sound alarms about unsafe AI development practices in the industry.
The pair were announced on Wednesday as recipients of the 2024 ACM A.M. Turing Award, often dubbed the “Nobel Prize of Computing,” which comes with a $1 million prize funded by Google.
Rather than simply celebrating their achievement, they immediately criticized what they see as dangerously rushed deployment of AI technologies.
“Releasing software to millions of people without safeguards is not good engineering practice,” Barto told The Financial Times. “Engineering practice has evolved to try to mitigate the negative consequences of technology, and I don’t see that being practiced by the companies that are developing.”
Their assessment likened current AI development practices to “building a bridge and testing it by having people use it” without proper safety checks in place, arguing that AI companies prioritize business incentives over responsible innovation.
The duo’s journey began in the late 1970s when Sutton was Barto’s student at the University of Massachusetts. Throughout the 1980s, they developed reinforcement learning—a technique where AI systems learn through trial and error by receiving rewards or penalties—when few believed in the approach.
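To make “learning through trial and error” concrete, here is a toy sketch of tabular Q-learning, one of the classic algorithms in the reinforcement learning family they pioneered. The tiny corridor environment and the hyperparameters are made up purely for illustration.

```python
# Toy sketch of tabular Q-learning on a 5-cell corridor: the agent starts at
# cell 0 and earns a reward only when it reaches cell 4. The environment and
# hyperparameters are illustrative, not taken from Sutton and Barto's book.
import random

N_STATES, ACTIONS = 5, [-1, +1]          # move left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should step right (+1) from every cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```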
Their work culminated in their seminal 1998 textbook “Reinforcement Learning: An Introduction,” which has been cited nearly 80,000 times and became the bible for a generation of AI researchers.
“Barto and Sutton’s work demonstrates the immense potential of applying a multidisciplinary approach to longstanding challenges in our field,” ACM President Yannis Ioannidis said in an announcement. “Reinforcement learning continues to grow and offers great potential for further advances in computing and many other disciplines.”
The $1 million Turing Award comes as reinforcement learning continues to drive innovation across robotics, chip design, and large language models, with reinforcement learning from human feedback (RLHF) becoming a critical training method for systems like ChatGPT.
Industry-wide safety concerns
Still, the pair’s warnings echo growing concerns from other big names in the field of computer science.
Yoshua Bengio, himself a Turing Award recipient, publicly supported their stance on Bluesky.
“Congratulations to Rich Sutton and Andrew Barto on receiving the Turing Award in recognition of their significant contributions to ML,” he said. “I also stand with them: Releasing models to the public without the right technical and societal safeguards is irresponsible.”
Their position aligns with criticisms from Geoffrey Hinton, another Turing Award winner—known as the godfather of AI—as well as a 2023 statement from top AI researchers and executives—including OpenAI CEO Sam Altman—that called for mitigating extinction risks from AI as a global priority.
Former OpenAI researchers have raised similar concerns.
Jan Leike, who recently resigned as head of OpenAI’s alignment initiatives and joined rival AI company Anthropic, pointed to an inadequate safety focus, writing that “building smarter-than-human machines is an inherently dangerous endeavor.”
“Over the past years, safety culture and processes have taken a backseat to shiny products,” Leike said.
Leopold Aschenbrenner, another former OpenAI safety researcher, called security practices at the company “egregiously insufficient.” At the same time, Paul Christiano, who also previously led OpenAI’s language model alignment team, suggested there might be a “10-20% chance of AI takeover, [with] many [or] most humans dead.”
Despite their warnings, Barto and Sutton maintain a cautiously optimistic outlook on AI’s potential.
In an interview with Axios, both suggested that current fears about AI might be overblown, though they acknowledge significant social upheaval is possible.
“I think there’s a lot of opportunity for these systems to improve many aspects of our life and society, assuming sufficient caution is taken,” Barto told Axios.
Sutton sees artificial general intelligence as a watershed moment, framing it as an opportunity to introduce new “minds” into the world without them developing through biological evolution—essentially opening the gates for humanity to interact with sentient machines in the future.
Edited by Sebastian Sinclair
Figure AI Dumps OpenAI Deal After ‘Major Breakthrough’ in Robot Intelligence
Published February 10, 2025
Figure AI, a U.S.-based startup focused on building AI-powered humanoid robots, severed its ties with OpenAI last week, with CEO Brett Adcock claiming a “major breakthrough” in robot intelligence that made the partnership unnecessary.
The split came just months after the two companies announced their collaboration alongside a $675 million funding round that valued Figure at $2.6 billion to kick-start its Figure 02 robot.

“Today, I made the decision to leave our Collaboration Agreement with OpenAI,” Adcock tweeted. “Figure made a major breakthrough on fully end-to-end robot AI, built entirely in-house.” The move marked a stark reversal for Figure, which previously planned to use OpenAI’s models for its Figure 02 humanoid’s natural language capabilities.
In a separate post, Adcock explained that, over time, maintaining a partnership with OpenAI to use its LLMs started to make less sense for his company.
“LLMs are getting smarter yet more commoditized. For us, LLMs have quickly become the smallest piece of the puzzle,” Adcock wrote. “Figure’s AI models are built entirely in-house, making external AI partnerships not just cumbersome but ultimately irrelevant to our success.”
Today, I made the decision to leave our Collaboration Agreement with OpenAI
Figure made a major breakthrough on fully end-to-end robot AI, built entirely in-house
We’re excited to show you in the next 30 days something no one has ever seen on a humanoid
— Brett Adcock (@adcock_brett) February 4, 2025
The decision came amid broader changes in the AI landscape. OpenAI itself had been rebuilding its robotics team, filing a trademark application mentioning “humanoid robots”—alongside a wide array of other technologies like virtual reality, augmented reality, agents, and wearables. It began hiring for its first robotics positions last month.
Some AI enthusiasts were quick to note that the move could just be another consequence of the DeepSeek effect—which has already pushed some of the most powerful AI companies in the world to lower the prices of their flagship models to remain competitive against open-source alternatives.
OpenAI sells some of the most expensive LLMs on the market, while DeepSeek R1 offers better results than OpenAI o1 and is available free, open source, uncensored, and highly customizable. Figure could simply be betting on an in-house foundational model to power its lineup without depending on OpenAI’s offerings.
Figure has already secured a deal with BMW Manufacturing to integrate humanoid robots into automotive production, and recently struck a partnership with an unnamed major U.S. client that would be its second big commercial client.
“It gives us potential to ship at high volumes—which will drive cost reduction and AI data collection,” Adcock posted on LinkedIn a week ago. “Between both customers, we believe there is a path to 100,000 robots over the next four years.”
Figure developed a data engine that powers its “embodied artificial intelligence” systems, enabling its robots to learn and adapt in real time through cloud and edge computing. The company’s technology allows its robots to respond to language prompts and perform tasks that combine language, vision, and action.
“We’re working on training the robot on how to do use case work at high speeds and high performance,” Adcock said. “Learning the use case with AI is the only path.”
OpenAI still maintains investments in other robotics ventures, including Norwegian startup 1X.
Adcock promised to reveal the fruits of Figure’s “breakthrough” within 30 days, and he wasn’t subtle with his words. He promised the announcement would be “something no one has ever seen on a humanoid.”
Guess he learned from the best.
Edited by Andrew Hayward
Presearch Rolls Out Privacy-First Chatbot PreGPT 2.0 to Challenge Big Tech
Published February 6, 2025
Presearch—a decentralized and privacy-oriented search engine—has just launched PreGPT 2.0, marking the company’s latest attempt to challenge Big Tech’s dominance in the AI chatbot space.
The new release brought improved language models and a broader selection of open-source AI options, all running on a network of distributed computers rather than centralized data centers.
“Why am I so excited? Because PreGPT 2.0 is so powerful and unrestrained, that it has the potential to fundamentally disrupt the echo chamber effect that has long been manipulating conventional wisdom, amplifying the herd instinct into blind conformity,” Brenden Tacon, innovation and operations lead for Presearch, told Decrypt.
The upgraded chatbot comes with two subscription tiers: a $2 monthly basic plan running Mistral AI’s 7B model and a $5 pro version powered by Venice.ai’s more sophisticated LLMs. Both options promise to keep user data private and conversations unmonitored, with chats permanently erased upon deletion.
PreGPT 2.0’s model lineup features six well-known names from the open-source AI space: Meta’s Llama-3.1-405b (a gigantic model), Llama-3.2-3b (a very small model built for efficiency), and Llama-3.3-70b (its latest LLM), plus Alibaba’s Qwen 32b.
It even leverages the older Dolphin 2.9 model, previously known in AI circles for being fully uncensored, powerful, and good at roleplay. The company also appears to have fine-tuned the Mistral 7B model to offer a custom version.

“This model gracefully handles a context of 8,000 Tokens, which equates to about 5,000 words, and you will be throttled to 1000 messages per month,” according to the company’s website.
In other words, the model has a working memory of roughly 5,000 words: it won’t properly handle conversations that exceed that limit, and it won’t process prompts that long either.
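If you want to check whether a prompt will fit before sending it, you can count tokens locally. Here is a minimal sketch using the open-source tiktoken library; since tiktoken ships OpenAI’s tokenizers, the count is only an approximation for Mistral-family models like the one behind PreGPT’s basic tier.

```python
# Minimal sketch: estimate whether a prompt fits an 8,000-token context window.
# tiktoken uses OpenAI's tokenizers, so counts are approximate for other models.
import tiktoken

CONTEXT_LIMIT = 8_000
enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(prompt: str, reserved_for_reply: int = 1_000) -> bool:
    """Return True if the prompt leaves `reserved_for_reply` tokens for the answer."""
    n_tokens = len(enc.encode(prompt))
    print(f"Prompt is roughly {n_tokens} tokens (~{len(prompt.split())} words).")
    return n_tokens + reserved_for_reply <= CONTEXT_LIMIT

print(fits_in_context("Summarize the following conversation: ..."))
```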
What is Presearch?
Presearch, which launched in beta back in 2017 and went live in 2018, is a project that aims to reimagine search engine architecture with decentralized technology.
The platform processes over 12 million monthly searches through a web of independent nodes. Each node operator stakes PRE tokens and lends computing power to the network, creating a self-sustaining ecosystem that scales naturally with demand.

The idea is that a decentralized network makes user profiling—the core of Google’s business model—harder, and could support a business model that is more transparent and organic.
The platform’s advertising model is also different from what you see in Google or Bing, for example.
Instead of bidding wars for keywords, advertisers stake PRE tokens to gain visibility. The more tokens they stake, the better their placement—a system that reduces token circulation while creating predictable revenue.
A portion of these tokens gets burned periodically, gradually decreasing the total supply from its current 590 million PRE in circulation.
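To make the mechanics concrete, here is a purely illustrative sketch of stake-weighted placement: this is not Presearch’s code, just the general idea that larger stakes buy better visibility for a keyword.

```python
# Purely illustrative sketch of stake-weighted ad placement: advertisers are
# ranked by how many PRE tokens they have staked on a keyword. Not Presearch's
# actual implementation; names and numbers are made up.
from dataclasses import dataclass

@dataclass
class AdStake:
    advertiser: str
    keyword: str
    staked_pre: float  # tokens locked up, not spent per click

def ranked_ads(stakes: list[AdStake], keyword: str, slots: int = 3) -> list[AdStake]:
    """Return the top ads for a keyword, ordered by staked tokens."""
    matching = [s for s in stakes if s.keyword == keyword]
    return sorted(matching, key=lambda s: s.staked_pre, reverse=True)[:slots]

stakes = [
    AdStake("wallet-co", "bitcoin", 50_000),
    AdStake("exchange-x", "bitcoin", 120_000),
    AdStake("newsletter-y", "bitcoin", 10_000),
]
for ad in ranked_ads(stakes, "bitcoin"):
    print(ad.advertiser, ad.staked_pre)
```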
PreGPT 2.0 leverages this distributed infrastructure through collaborations with Venice.ai, a privacy-conscious AI service provider, and Salad.com, a community that shares decentralized GPU power.
The professional tier operates on Venice.ai’s high-performance network, while the basic plan is supported by Salad.com’s distributed GPU network.
Both pathways encrypt user interactions and refrain from storing chat logs, upholding Presearch’s commitment to privacy.
PRE’s tokenomics keeps the system running smoothly. Users earn up to 8 tokens daily for search queries, while node operators receive rewards based on their stake size and search volume.
This, at least in theory, seems like a win-win situation in which both users and advertisers are properly rewarded while helping the ecosystem grow.

PreGPT 2.0 is a separate AI feature added to Presearch’s toolkit; the company remains focused on its core mission of decentralized, private search.
The chatbot integration is intended to complement the search experience without overshadowing it.
The goal is to make the entire platform ideal for privacy-conscious users who want a replacement for traditional web searches and are curious about using AI tools in their everyday lives.
Hands-On with PreGPT 2.0: Promise and Limitations

Testing PreGPT 2.0 revealed a capable chatbot that prioritizes function over flash. The interface felt cleaner than competitors like Venice.ai or HuggingChat, though it lacked the image generation capabilities that have become standard elsewhere.
A system prompt feature lets users fine-tune the AI’s behavior through custom instructions, which is helpful for getting more precise responses—a sound system prompt can dramatically improve a model’s performance.
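Under the hood, many chat frontends implement such custom instructions as a “system” message sent ahead of the conversation, as in the OpenAI-compatible sketch below. The base URL, API key, and model name are hypothetical placeholders; PreGPT itself takes these instructions through its settings UI rather than through code.

```python
# Illustrative sketch of how a system prompt steers a chat model through an
# OpenAI-compatible API. The base URL, API key, and model name are hypothetical
# placeholders, not PreGPT's actual endpoints.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.invalid/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                          # placeholder key
)

response = client.chat.completions.create(
    model="mistral-7b",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a concise research assistant. Answer in bullet points "
                "and say so when you are unsure."
            ),
        },
        {
            "role": "user",
            "content": "Summarize the main privacy features of decentralized search engines.",
        },
    ],
)
print(response.choices[0].message.content)
```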
The overall experience will feel familiar to those used to tinkering with different chatbots.
This wasn’t a revolutionary leap in AI capability but rather a privacy-focused implementation of existing open-source models that are often less powerful than mainstream alternatives like GPT-4o or DeepSeek.
The platform only handles plain text. It can craft a bedtime story or summarize trends, but it lacks support for Excel documents and cannot properly process CSV files, PDFs, or third-party docs.
Instead, users must copy the contents of a sheet and paste it in as text, which is far from ideal.
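As a workaround, you can flatten a spreadsheet into plain text before pasting it into the chat. Here is a minimal sketch using Python’s standard csv module; the file name and column layout are placeholders.

```python
# Minimal workaround sketch: flatten a CSV into plain text that can be pasted
# into the chatbot, since the platform does not accept file uploads.
# The file name and column layout are placeholders.
import csv

rows = []
with open("expenses.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        rows.append(" | ".join(cell.strip() for cell in row))

pasteable = "\n".join(rows)
print(pasteable)  # copy this output into the chatbot prompt
```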
Those who equate decentralization with slow speeds have nothing to worry about: the replies were fast, and the chatbot never hung. But the models offered the quality you’d expect from open-source LLMs that aren’t exactly topping the charts in the LLM Arena—Llama 3.1 405b, the most powerful model in Presearch’s roster, currently sits in 27th place.
It’s not bad, but it’s also not impressive by today’s standards.

There are open-source models available today that are considerably more powerful at roughly similar sizes.
For example, Llama-3.1-Nemotron-70B-Instruct could easily substitute for the newer (but not better) Llama-3.3-70b, and DeepSeek R1 is leaps ahead of Meta’s Llama 3.1 405b, standing as the best open-source model to date.
Overall, the experience was pleasant; the models performed as expected, and the interface was easier to use than that of Venice.ai, its main competitor.
If you are looking for a privacy solution or want to try every AI tool available today, this feature is definitely worth a look. Just take into consideration that the search engine won’t replace Google, and the AI chatbot won’t replace ChatGPT—at least not yet.
Edited by Josh Quittner and Sebastian Sinclair