🧙🏼‍♂️ Pizza with extra glue, please

Also: Copilot as a team member

in partnership with

nr 42 / Subscribe | Sponsor | Submit GPT | Top 2,500 GPTs

Howdy, wizards.

Google’s new AI search is recommending that people put glue on their pizza. It’s a sticky situation.

Here’s what’s brewing in AI this week:

  1. Google’s AI overview backfires hard. The difficulty in getting AI answers correct, at scale, is real.

  2. Elon raises $6 billion for xAI. xAI is a legitimate challenger to OpenAI.

  3. Microsoft expands Copilot to act as a team member. Your team’s new coordinator is here.

  4. The latest on OpenAI’s ongoing drama. Sam Altman probably knew about the equity clawback thing.

  5. Anthropic maps their AI’s “brain”. We really need this kind of research.

  6. GPTs: top arrivals in the GPT store and on whatplugin.ai

This issue is brought to you by

The first AI-powered startup unlocking the “billionaire economy” for your benefit

It’s one of the oldest markets in the world, but until recently the average person would never dream of investing in it. That changed when a Harvard data scientist and his team cracked the code with a system to identify “excess alpha.”

The company that makes it all possible is called Masterworks, whose unique investment platform enables savvy investors to invest in blue-chip art for a fraction of the cost. Their proprietary database of art market returns provides an unrivaled quantitative edge in analyzing investment opportunities. 

So far, it's been right on the money. Every one of their 16 exits has been profitable, with recent exits delivering +17.8%, +21.5%, and +35.0% net annualized returns.

 Dario’s Picks 

1. Google’s AI overview backfires hard

Google recently introduced AI overviews, which give users an AI-summarised answer to their search query, appearing above the “ten blue links” we’re used to. The new feature is getting meme’d pretty hard due to the misinformation it provides, often drawing on bizarre, outdated Reddit comments that were purely meant as humour.

Here are some hilarious examples that have set social media abuzz in the last week:

Google is now scrambling to remove AI answers from specific, problematic searches. A Google spokesperson told The Verge that these types of answers are uncommon and in some cases fake, and that they’re taking swift action to improve their model.

Why it matters

Google has been testing AI overviews for over a year, which shows the difficulty of getting AI answers correct at scale. It’s pretty clear that Google is facing competitive pressure from all sides to launch something new and improved, with Microsoft and OpenAI investing in AI search alternatives, and Gen Z embracing other channels for search, like TikTok.

I wouldn’t dismiss AI overviews despite the feature’s early-stage challenges. The vision here is multi-step reasoning: rather than just giving you an answer, it would be able to take action. For now, though, I would take its advice with a big pinch of Elmer’s school glue.

2. Elon raises $6 billion for xAI

One of the biggest AI investments so far happened this week, with Elon Musk’s xAI raising $6 billion; the company is now valued at about a third of OpenAI’s valuation ($24b vs $80b). xAI’s visible achievements so far are releasing the Grok-1 model in November last year, open-sourcing it, and launching Grok-1.5 with long-context capability and vision.

The funds will be used to take xAI’s first products to market, build infrastructure, and fund R&D for future technologies.

Related: two other large investment rounds were announced recently. Scale AI raised $1 billion and Suno raised $125 million. Scale AI turns raw data into high-quality training data for LLMs, and Suno is the leading AI music generator.

Why it matters

With the financial traction it now has, xAI is a legitimate challenger to OpenAI. However, Elon Musk has previously sued OpenAI for prioritising profit over its mission, and he was a leading voice in calling for a pause in AI development last year. It will be interesting to see how he balances a commitment to openness and safety with profit incentives.

3. Microsoft expands Copilot to act as a team member

Microsoft has announced a couple of cool things in the last week:

1) Real-time AI video translation on sites like YouTube, LinkedIn, Coursera and more in the Edge browser.

2) AI-powered copy-paste. “Advanced Paste” lets you paste text in formats such as plaintext, markdown, and JSON (you can describe the format you want it in). It does cost a little, as it uses OpenAI API credits to achieve this (a rough sketch of the idea follows this list).

3) Microsoft Copilot as a team member. Microsoft is extending Copilot’s functionality beyond personal assistant to “working as a team member”, including as a meeting facilitator, group collaborator and project manager.
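For the curious, here’s a minimal, hypothetical sketch of the idea behind AI-powered paste: send the clipboard text plus the desired format to an LLM, then paste whatever comes back. This is not Microsoft’s implementation; it only assumes the `openai` Python package, an OPENAI_API_KEY in your environment, and an illustrative helper name (`reformat_text`).

```python
# Hypothetical sketch of AI-assisted paste: ask an LLM to rewrite
# the copied text in the format the user asked for.
# Not Microsoft's implementation; assumes the `openai` package and
# an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def reformat_text(text: str, target_format: str) -> str:
    """Return `text` rewritten in the requested format (e.g. "markdown", "JSON")."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-completions model would do
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text as {target_format}. "
                        "Return only the converted text, nothing else."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Example: turn a scruffy note into JSON before pasting it somewhere
print(reformat_text("name Ada, role engineer, team platform", "JSON"))
```

The cost per paste is whatever the model charges for those few tokens, which is why the feature draws on OpenAI API credits rather than being free.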

Why it matters 

The team Copilot feature seems pretty handy, although framing it as a “team member” might be a bit of a stretch. The new functionality generally allows it to combine coordination tasks for which standalone apps already exist: taking meeting notes, surfacing information in group chats, staying on top of action items and assigning tasks.

4. The latest on OpenAI’s ongoing drama

OpenAI has faced a great deal of controversy on multiple fronts since its Spring Update two weeks ago. First, there’s the alleged cloning of Scarlett Johansson’s voice for ChatGPT’s “Sky” voice (now removed). Second, key employees are leaving while citing neglect of safety considerations in building AGI, notably chief scientist Ilya Sutskever and superalignment lead Jan Leike. Lastly, there’s the equity clawback mechanism in its off-boarding contract, which effectively silences dissent from former employees.

Here’s a recap of the latest, related developments:

Why it matters

OpenAI has a fantastic product – yet it’s hard to say where it’s all going with these scandals. It seems like both internal and public trust has taken a hit. That’s dangerous territory for any business.

5. Anthropic maps their AI’s “brain”

Anthropic has made some interesting progress in understanding the inner workings of large language models – apparently a big step toward better safety mechanisms.

They’ve basically mapped how different concepts are represented inside the “brain” of their Claude Sonnet model: by recording the patterns of neuron activation the model produces for different concepts in the input text (e.g. “apple” and “freedom”), they gained insight into how the AI processes information. The research is considered a step closer to controlling the model’s behaviour.
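To make the idea concrete, here’s a toy sketch of the basic ingredient: reading a model’s internal activations for different concepts and comparing them. This is not Anthropic’s actual method (their work trains sparse autoencoders over Claude’s activations, which aren’t publicly accessible); it only assumes the open-source `transformers` library and the public GPT-2 model.

```python
# Toy illustration of "looking inside the brain" of a language model:
# record internal activations for two concepts and compare them.
# Not Anthropic's method; uses the public GPT-2 model via Hugging Face.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def activation_vector(text: str, layer: int = 6) -> torch.Tensor:
    """Mean hidden-state activation at one layer for the given text."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.hidden_states[layer].mean(dim=1).squeeze(0)

apple = activation_vector("I ate a crisp red apple.")
freedom = activation_vector("They marched for freedom and liberty.")
similarity = torch.nn.functional.cosine_similarity(apple, freedom, dim=0)
print(f"cosine similarity between concepts: {similarity.item():.3f}")
```

Anthropic’s research scales this idea up dramatically, using dictionary learning to pull millions of human-interpretable features out of those raw activation vectors.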

Why it matters

We’re on a fast track to developing intelligence greater than ours. We need funding, focus, and acknowledgement for this kind of research.

 GPTs 

Top newcomers on whatplugin.ai

Highly rated GPTs that made it into whatplugin.ai’s database in the last week. How rankings work.

whatplugin rank | GPT | Category | Avg. rating
#105 | AMBOSS Medical Knowledge | Learning | 4.3 (190)
#404 | The Greatest Computer Science Tutor | Learning | 4.4 (236)
#555 | Book Writer GPT | Writing | 4.5 (818)

 The wizard’s favourite AI newsletters 

what i’m reading right now

TLDR 💨 - essential tech news in 5 mins a day. Read by 1.2 million people.

Bagel Bots 🥯 - best hands-on tips & tricks

simple.ai - 🕵🏻‍♂️ - deep-dives on AI by Hubspot’s co-founder

The Neuron 😸 - easy weekday read on AI’s latest developments

*if you sign up for these newsletters I may earn a commission at no cost to you. If you do, THANK YOU, it will make it possible for me to continue to do this.

That’s a wrap for this week!

Fellow sorcerers – join me on LinkedIn.

Until next time,

Dario Chincha 🧙🏼‍♂️

What's your verdict on this week's email?


Past performance is not indicative of future returns, investing involves risk. See disclosures masterworks.com/cd