The Only AI Company That’s Not (Openly) Evil
PLUS: Latest AI Updates | L8R by Innov8

Top AI minds have come together to build a new company.
They have only one goal..
a superintelligent AI that’s truly safe..
Welcome to Safe Superintelligence Inc. (SSI), founded by Ilya Sutskever (ex-OpenAI), Daniel Gross, and Daniel Levy.
They’re aiming big.
They’re promising pure focus.
They’ve already rejected billions from Meta.
But can they really pull this off?
Are they really the good guys?
I'm Alex. Welcome to L8R by Innov8.
Let’s DIVE DEEP 🕵️♂️ 👇️
In today’s post:
💀 Safe Superintelligence (SSI) Deep Dive 🔥
1️⃣ What Does "Safe Superintelligence" Really Mean?

SSI says they’re building an AI smarter than humans—one that will never harm us.
In just one year:
✅ $30 billion valuation
✅ Rejected Meta’s $32B buyout
✅ Laser-focused mission: No products, no distractions, just safe AGI
But is this real?
And what does “safe” actually mean in practice? So far:
No research papers or public safety methods
No technical details on how safety is measured
OpenAI made the same promise early on.. but they've since changed course.
2️⃣ Can SSI Stay Mission-Focused or Will Money Win?
They say: "No products, no side missions—just superintelligence."
But:
Investors have already pumped in billions
Meta offered $32B to buy them—refused (for now)
Daniel Gross (co-founder) left and joined Meta’s AI lab
Why It Matters:
History shows that even idealistic labs eventually chase products and profits. Can SSI resist the pressure?
What’s Next:
Future funding rounds, leadership changes, or new hires from Big Tech could test their resolve.
3️⃣ Is SSI More Trustworthy Than OpenAI’s Early Days?

Ilya Sutskever helped build OpenAI—and watched it drift from transparency to corporate secrecy.
Now at SSI, he’s promising a return to safety-first principles. But:
Sutskever’s own leadership at OpenAI wasn’t always transparent
SSI is a for-profit company, just like OpenAI became
So far, there’s been zero public sharing of research
4️⃣ Is "No Product Until Superintelligence" Even Practical?
SSI says they won’t release anything until their final, safe superintelligence is ready. No demos, no smaller AI models.
But:
Can they really keep investors happy without making money for years?
Will top researchers stay if there’s nothing public to show?
Can they test superintelligence without public feedback?
Why It Matters:
Staying this pure is rare. It sounds great—but without products, they risk running out of steam, money, or relevance.
What’s Next:
Watch for secret partnerships, private demos, or quiet monetization. Even the most focused labs need milestones to survive.
5️⃣ Can Big Tech Still Squeeze In?

Meta already tried to buy SSI. When that failed, they hired co-founder Daniel Gross and SSI’s investor Nat Friedman.
Also:
Google Cloud is providing SSI’s AI compute—friend today, but power tomorrow?
Big Tech rarely gives up on companies they want
Why It Matters:
Staying independent is tough in the AI wars. Big Tech has money, compute, and long patience. SSI could face new acquisition attempts, investor pressure, or soft influence through cloud deals.
🚨 Quick L8R Summary
Founded: June 2024 by Ilya Sutskever, Daniel Gross, Daniel Levy
Mission: One goal: safe superintelligence—no side products
Funding: $3B total ($1B in Sep 2024, $2B in Mar 2025)
Valuation: ~$30B
Team: ~20 researchers, locations in Palo Alto and Tel Aviv
Key Players: Meta, Google, OpenAI, Anthropic, xAI competing in this space
Big Moves: Meta tried to acquire SSI, failed; co-founder Gross moved to Meta
Current Focus: Pure R&D, zero product releases, building compute scale with Google Cloud
Safe Superintelligence sounds like the hero story we all want—but hero stories get complicated.
Where's the real proof of safety?
Can they resist money and market pressure?
Can they avoid Big Tech's influence?
For now, SSI stands apart, arguably the safest of the lot. But in the AI arms race, staying apart is the hardest thing to do.
That's all for today's AI update.
See you in the next L8R.. bye.
How was today's L8R?
Join our AI Creator Course - here
