Trending topics
Bonk Eco continues to show strength amid $USELESS rally
Pump.fun to raise $1B in token sale as traders speculate on airdrop
Boop.Fun leading the way with a new launchpad on Solana.

Big Brain AI
Learn to not get left behind when AI takes over
Jonathan Ross, Founder and CEO of AI chip company Groq, offers a contrarian view: AI won't destroy jobs; it will create a labour shortage.
He outlines three things that will happen because of AI:
First, massive deflationary pressure.
"This cup of coffee is going to cost less. Your housing is going to cost less. Everything is going to cost less."
He explains that this will happen through robots farming coffee more efficiently and through better supply chain management, meaning people will need less money.
Second, people will opt out of the economy.
"They're going to work fewer hours. They're going to work fewer days a week, and they're going to work fewer years. They're going to retire earlier because they're going to be able to support their lifestyle working less."
Third, entirely new jobs and industries will emerge.
Jonathan points to history as evidence:
"Think about 100 years ago. 98% of the workforce in the United States was in agriculture. When we were able to reduce that to 2%, we found things for those other 98% of the population to do."
He continues:
"The jobs that are going to exist 100 years from now, we can't even contemplate."
Software developers didn't exist a century ago. In another century, they won't exist either, "because everyone's going to be vibe coding."
The same applies to influencers, a career that would have been unthinkable 100 years ago but now earns people millions.
His conclusion: deflationary pressure, workforce opt-outs, and new industries we can't yet imagine will combine to create one outcome...
"We're not going to have enough people."
NVIDIA CEO Jensen Huang warns that America's lead in AI is far from secure.
He breaks down the US-China AI competition into what he calls a "five-layer cake."
And while the US dominates some layers, Jensen sees critical vulnerabilities in others...
1) Energy:
China has twice as much energy as the US despite a smaller economy, which "makes no sense" to Jensen.
2) Chips:
The US is "generations ahead," but Jensen warns against complacency. "Anybody who thinks China can't manufacture is missing the big idea."
3) Infrastructure:
Standing up a data center in the US takes about three years. In China? "They can build a hospital in a weekend."
4) Models:
US frontier models are "unquestionably world class," but "China is well ahead, way ahead on open source."
5) Applications:
Public sentiment differs sharply. Ask both populations whether AI will do more good than harm, and "in their case 80% would say AI will do more good than harm. In our case, it'd be the other way around."
Jensen's warning is clear.
Leading in chips and frontier models isn't enough when you're behind on energy, infrastructure speed, open source, and public trust.
Winning the AI race requires strength across the entire stack, and right now, the US has work to do.
In 1956, humanity reached a turning point nobody talks about.
10 scientists gathered in college apartments to chase an idea the world called "science fiction."
This was AI's Big Bang moment:
Late summer in 1955...
John McCarthy (a professor of mathematics at Dartmouth College) assembled an all-star team:
• Marvin Minsky - cerebral Harvard mathematician
• Nathaniel Rochester - IBM's practical engineer
• Claude Shannon - information theory genius
• Plus 6 other researchers
Together, they drafted a research proposal for a two-month summer workshop at Dartmouth College.
Their audacious goal?
Prove that machines could simulate every aspect of human intelligence — learning, reasoning, language, problem-solving.
McCarthy coined a name for this radical new area of research:
"Artificial Intelligence."
It was the birth moment of AI.
But finding anyone to fund this research would prove to be brutal ↓
McCarthy requested $13,500 (about $159,000 today) from potential funders.
The response? Rejection after rejection.
Organizations couldn't grasp the implications. The idea of machines "thinking" was too radical, too philosophical, too uncertain. The field didn't even exist yet - how could anyone justify investing in it?
Finally, after months of persistence, the Rockefeller Foundation agreed to fund the project.
But there was a catch:
They'd only invest $7,500 ($88,000 today).
Barely enough to cover the basics.
June 1956, the research began ↓
The 10 scientists gathered in college apartments and rented inns for 6-8 weeks.
"At the time I believed if only we could get everyone together to devote time to it, we could make real progress," McCarthy reflected.
The technology they had was extremely primitive, making it nearly impossible to test theoretical ideas.
But the challenge went even deeper than hardware ↓
Nobody could agree on what "intelligence" actually meant.
Was it problem-solving?
Reasoning?
Language?
Learning?
The group wrestled with fundamental questions that had no answers.
• Minsky later admitted that they "realized intelligence wasn't a neat puzzle but a sprawling wilderness."
• One participant put it this way: "programming machines to learn was like raising a child with a broken toolset."
When the summer ended in 1956, so did their work.
They left with:
No prototype
No consensus
No breakthrough
But something far more important happened ↓
They created the foundation for an entire scientific field of research.
Every AI system today — from ChatGPT to self-driving cars — traces its lineage to those weeks in New England.
Summer 1956 produced no working "thinking machine." But it gave AI a name, a vision, and a community of renegade thinkers willing to risk everything on an impossible dream.
That summer was AI's big bang moment.
And the shockwaves are still expanding today.
—
Thanks for reading!
Enjoyed this post?
Follow @RealBigBrainAI for more content like this.
