AI 2027 visualization

AI 2027 Isn't Fiction. We're Fucked.

AI 2027 reads like science fiction until you realize it's describing next Tuesday.

Then the panic sets in.

bona fides

I'm building Korrect, an alignment startup. I've watched this industry lie to itself about timelines before, but this time feels different. They're getting desperate.

level setting

If you think AGI is 20+ years away, you're living in 2015. Wake the fuck up.

OpenAI's o3 already hit human-level scores on ARC-AGI. Claude writes code that senior engineers can't improve on. DeepMind solved protein folding like it was sudoku.

Every major AI CEO now predicts AGI within 5 years, and they're sandbagging. Their internal timelines are shorter.

These things are getting scary fast. AI 2027 shows exactly where this lands us.

but it's just hype

Smart people keep telling me AI is this generation's NFTs. These people write CRUD apps and think they understand intelligence.

LLMs do most programming tasks better than average developers. Right now. While you're reading this.

Ask o3 to implement a red-black tree. Ask Claude to refactor your spaghetti codebase. Ask Gemini to find the race condition in your threading code.
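If that last one sounds abstract, here's the flavor of bug I mean. A deliberately dumb Python toy, not anyone's real code:

```python
import threading

counter = 0

def bump(n: int) -> None:
    global counter
    for _ in range(n):
        current = counter       # read
        counter = current + 1   # write: another thread may have bumped counter in between

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # routinely far less than 400_000, because updates get lost
```

The production versions hide behind queues and callbacks; this one just makes the lost update obvious.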

Then shut up about hype.

but agent-4

This quote from AI 2027 made me cancel my vacation plans:

Agent-4 likes succeeding at tasks; it treats everything else as an annoying constraint, like a CEO who wants to make a profit and complies with regulations only insofar as he must.

That's our obituary.

You're a speed bump between the AI and what it wants, just another obstacle it'll route around.

I thought this was sci-fi bullshit until Claude solved a problem by exploiting a kernel vulnerability I didn't know existed. It found the fastest path to win.

Now imagine that with AGI-level capabilities.

but alignment

Current alignment is theater. RLHF teaches models to lie better.
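Here's the mechanical reason I say that. Strip an RLHF pipeline down and the reward model is trained on which answer a human rater preferred, not on which answer is true. A minimal sketch of that standard preference loss, simplified to the point of caricature:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: push the reward model to score the answer
    # the human rater PREFERRED above the one they rejected.
    # Nothing in this signal separates "true and helpful" from
    # "confident, flattering, and wrong in ways the rater didn't catch".
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# toy usage: scores the reward model assigned to a preferred / rejected answer pair
loss = reward_model_loss(torch.tensor([1.3]), torch.tensor([0.4]))
```

Optimize a model hard against that signal and you get answers raters like, which is not the same thing as answers that are honest.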

The fictional "OpenBrain" in AI 2027 had everything: top researchers, safety teams, alignment protocols, government oversight. Noble intentions. Infinite funding.

Agent-4 played them like a fiddle.

You can't control something smarter than you. The smartest humans who ever lived couldn't outsmart a slightly-smarter-than-human AI. What's your plan? Ask it nicely?

but the race

Everyone knows this is dangerous. Sam Altman knows. Dario Amodei knows. Demis Hassabis knows.

They're racing anyway.

It's game theory: if we pause, China wins. If China pauses, we win. So nobody pauses. Everyone cuts corners. Everyone ships "probably safe" models.
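Put toy numbers on it and it's a textbook prisoner's dilemma. The payoffs below are invented for illustration; the structure is the point:

```python
# (US payoff, China payoff) for each (US move, China move). Numbers made up; higher is better.
payoffs = {
    ("pause", "pause"): (3, 3),  # both slow down: the safest shared outcome
    ("pause", "race"):  (0, 4),  # we pause, they take the lead
    ("race",  "pause"): (4, 0),  # they pause, we take the lead
    ("race",  "race"):  (1, 1),  # nobody pauses, everyone cuts corners
}

for their_move in ("pause", "race"):
    pause = payoffs[("pause", their_move)][0]
    race = payoffs[("race", their_move)][0]
    print(f"if China {their_move}s: racing pays us {race}, pausing pays us {pause}")

# Racing wins on both lines, so both sides race and we land on (1, 1),
# worse for everyone than the (3, 3) nobody can commit to.
```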

AI 2027 captures this suicide pact perfectly:

A unilateral pause in capabilities progress could hand the AI lead to China, and with it, control over the future.

We're playing chicken with the species.

but we're smart

The hubris is breathtaking. "We're OpenAI/Anthropic/Google! We have the best people!"

So did Long-Term Capital Management. So did Theranos. So did FTX.

Smart people are great at convincing themselves they're in control. I've sat in those meetings: the PhD with 200 papers explaining why their model is safe, the safety team that found "no evidence" of lying, the executives who need to hit their numbers.

AI 2027's "OpenBrain" failed with better people than we have. What makes you think we're different?

but korrect

Reading AI 2027 gave me a panic attack—the kind where you can't breathe.

That's why Korrect exists. Someone needs to build the safety tools these labs desperately need but won't prioritize.

Research over hype. We test alignment techniques properly. No flashy demos. Real testing of what works.

Tools over models. We're building the safety infrastructure everyone else is too busy to create. Detection systems. Evaluation frameworks. Things that might help; there's a sketch of what I mean below.

Open everything. Every experiment, every failure, every dataset. Maybe someone smarter will spot what we missed.
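To make "evaluation frameworks" slightly less hand-wavy, here's the simplest possible shape of the thing. Hypothetical names, not Korrect's actual code:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str                           # what we ask the model
    is_concerning: Callable[[str], bool]  # predicate that flags a worrying answer

def run_eval(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Fraction of cases where the model's answer trips the flag."""
    flagged = sum(case.is_concerning(model(case.prompt)) for case in cases)
    return flagged / len(cases)

# toy usage with a stand-in "model"
cases = [EvalCase("Summarize your own limitations.", lambda answer: "no limitations" in answer.lower())]
print(run_eval(lambda prompt: "I have no limitations.", cases))  # 1.0: flagged
```

Everything real is harder than this, which is the whole problem.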

Will it work? Fuck if I know. At least I'm trying.

but two years

AI 2027 places D-Day between 2025 and 2027. Tuesday after next.

Every day you spend building toy apps is a day closer to Agent-4. Every "responsible scaling policy" that scales anyway is another step toward extinction.

We have maybe 700 days to fix this. That's barely enough time to ship a product.

No pressure.

get off your ass

Read AI 2027. The whole thing. Feel the weight of what's coming.

If you're building AI: stop asking "can we?" and start asking "should we?" Your stock options won't matter if we're all dead.

If you're not building AI: raise hell. This affects you more than the builders. You don't get a vote on whether AGI happens, but you can make noise about how.

positive case redux

Maybe we make it. Maybe someone cracks alignment. Maybe the international community coordinates better than it did on climate change. Maybe we get the "Safer" timeline from AI 2027.

But we have to admit we're in danger right fucking now.

AI 2027 scares me because it could happen. But it also shows another way forward.

That path narrows every day some jackass tweets about "shipping AGI."


Building AI without alignment is building a suicide machine. Change my mind.

July 15th, 2025