The Great AI Convergence: Why I'm Buying GPUs

28.10.25 02:03 AM - By Chuck Orzechowski

Guest Blog Article by Sean Patterson

I'm watching something happen in real time that most people aren't talking about yet.

Every major AI company is racing toward the same feature set. Anthropic launches multimodal desktop integration. Two days later, ChatGPT releases a browser. Perplexity's had one for months. We keep leapfrogging each other, and honestly? It's all becoming the same thing.

Claude can operate directly on your desktop now. It's incredible. But here's what strikes me: I'd already built most of these capabilities for myself before the announcements dropped. Sure, the commercial versions are more polished and integrated. But what I built? Good enough for my use cases, and in many ways better, because it's built on my own understanding of the system and it runs on my computer.

And that tells me something important about where we're headed.

The Only Real Moat Is the Frontier Period

The only truly proprietary advantage left in AI is how long model companies can maintain a frontier lead. That brief window where one lab has a meaningfully better base model than everyone else. But even that advantage is shrinking fast.

Here's the reality: if you care about security, if you need systems that are completely offline, open source models are good enough for most use cases now. Not all use cases, but most. And that changes everything about how we should think about AI infrastructure.

That's why I've started hoarding GPUs. Not because I'm paranoid about the future. Because I want control. I want to satisfy my own computing needs as an individual, as a consulting business, and when I'm helping companies implement AI. I want systems I can trust, running on hardware I own, processing data that never leaves my environment.

And if you needed another reason to own your own infrastructure, look at what just happened with AWS. The recent outage reminded everyone that even the most reliable cloud providers have single points of failure. When your AI capabilities depend entirely on someone else's infrastructure, you're one outage away from being dead in the water.

The only AI you should truly trust is the AI that runs on your computer.

How I'm Building My Own System

I've changed how I work with open source solutions entirely. I used to clone GitHub repositories, try to integrate them, and then constantly deal with updates, breaking changes, and dependency hell.

Now? I assess the concepts and ideas in open source projects. I pull the specific ideas I want into my own bespoke system. That way it remains mine. I don't have to constantly change things when some dependency updates or a project pivots direction.

At the end of the day, I'm cherry-picking: taking the best ideas, the proven approaches, and building them into something that works exactly how I need it to work. It's more work upfront, but it's far more stable and customizable long-term.

This approach works because these AI systems can help me understand and recreate the core logic of almost any open source tool. I don't need the entire codebase. I need the insight, the approach, the technique. And then I can build my own version that fits my specific requirements.

Why Physical Goods and Services Matter Again

I think we're heading toward a fundamental shift in how value gets created. Technology scaling has been the main driver of company growth for decades. But if AI commoditizes the technology work itself, what happens?

You end up with fewer massive tech companies and more small, efficient teams. The real moat becomes physical goods and services: things AI can't easily replicate or deliver on its own.

And yes, there will be opportunities for people who are very good with AI. They'll be critical people within companies. But here's the thing: companies won't need many of them. A small number of AI-capable people can leverage these tools to do what used to take entire departments.

I think we're going back to our roots in some ways. Before the industrial revolution, value came from physical goods and services delivered by skilled people. Unless you're part of the industrial or technological infrastructure itself, that's where we're headed again.

What Differentiates You Isn't Your Knowledge Anymore

This is the hard truth that knowledge workers need to face: your domain expertise alone doesn't differentiate you anymore. These AI systems can code well enough to recreate features just by looking at them or ideating around them. They can research, write, analyze, and synthesize at expert levels across most domains.

I suggest everybody focus seriously on upskilling and learning their AI tools. Without that capability, you don't have much going into the future.

The people who will provide AI solutions and support aren't going to be traditional employees. They'll be curious, highly flexible learners with multi-domain coverage. And most likely, they'll provide fractional support across many companies rather than full-time roles at one.

You already know this intuitively if you're paying attention. The job market is shifting. The skills that mattered five years ago aren't the skills that matter now. And the pace of that change is accelerating.

The Three-Layer Skill Stack You Need

Let me make this concrete. There's a progression to AI capability that's learnable, and knowing where you are helps you focus on what's next.

The Foundational Layer is about understanding how these systems actually work. Not building them from scratch, but knowing enough to make good decisions. You know you're moving past this layer when you can explain to a colleague why an AI gave a particular answer, or why it's struggling with a specific task. You understand context windows, training data limitations, and the difference between different model types.

The signal you're ready for tactical work is when you stop being surprised by what AI can and can't do. You've developed an intuition for the technology's boundaries.

The Tactical Layer is where you're building things. Custom GPTs, automation workflows, integrated solutions that save real time. You're not just using AI tools, you're architecting how they fit together. You're designing prompts that consistently get good results. You're connecting AI to your actual work processes.

You know you're ready to move into strategic territory when people start asking you to solve their AI problems, not just use the tools for your own work. When you can look at a business process and immediately see three ways AI could improve it, and know which one to actually implement.

The Strategic Layer is about knowing when NOT to use AI. It's advising on major implementation decisions. It's understanding organizational readiness, change management, and where the technology is mature versus where it's still too unreliable. You're thinking about competitive positioning, capability building, and how AI fits into broader business strategy.

This progression takes time. Foundational might be 2-3 months of consistent learning and experimentation. Tactical is 6-12 months of hands-on practice building real solutions. Strategic is 1-2 years of applied experience across different contexts and use cases.

But here's what matters: each layer has market value right now.

Foundational capability means you can use AI tools effectively in your current role. You're more productive than colleagues who haven't learned these systems. That's immediate value.

Tactical capability means you can build solutions that save 10-20 hours per week for a team. You're not just productive yourself, you're multiplying the productivity of others. That's consultable, billable value.

Strategic capability means you can advise on $100K+ AI implementation decisions. You're helping organizations avoid expensive mistakes and identify high-value opportunities. That's executive-level value.

The people who will thrive aren't the ones with the most AI knowledge. They're the ones who know which layer they're on and focus their learning accordingly.

The Domain Expertise Paradox

Here's where it gets interesting. I said your domain expertise doesn't differentiate you anymore. But I also said specialized domain expertise that requires years of context still matters. Both statements are true. You just need to understand which parts of your expertise are vulnerable and which parts are defensible.

Knowledge versus judgment. That's the distinction.

Knowledge is facts, procedures, frameworks. It's the stuff you can look up, the best practices you learned, the standard approaches to common problems. AI can replicate all of that. It can probably explain your industry's frameworks better than you can, because it has perfect recall and can synthesize across thousands of sources.

Judgment is different. Judgment is knowing which approach fits THIS specific situation. It's reading between the lines in a client conversation. It's understanding the organizational politics and history that explain why the technically correct solution won't actually work here. It's recognizing patterns from failures you've seen before that don't show up in any documentation.

AI doesn't have judgment. It has pattern matching at massive scale, but it doesn't have your ten years of watching initiatives fail in your specific organization. It doesn't know that the VP of operations and the CFO haven't spoken in six months, so any solution requiring their collaboration is DOA.

Context depth is your moat. And I don't mean general industry knowledge. I mean the specific, granular context of THIS company, THAT client, THESE stakeholders.

You know the unwritten rules. You know why the last three change initiatives failed even though they looked good on paper. You know which person's opinion actually matters in the decision, regardless of the org chart. You know that when this particular client says they want "innovation," they actually mean "make it look different but don't change how we work."

That kind of context takes years to build and can't be replicated by an AI that wasn't in the room for all those experiences. The knowledge worker who survives isn't the one with the most industry knowledge. It's the one with the deepest contextual understanding of their specific environment.

Domain experts who learn AI have an advantage over AI experts who learn domains. This is critical to understand.

You can teach someone to prompt an LLM correctly in a few weeks. You can teach them to build basic automations in a few months. But you can't teach them 10 years of healthcare compliance nuances, or supply chain failure patterns, or the subtle indicators that a client is about to churn.

If you're a domain expert who learns AI, you're combining deep contextual judgment with powerful new tools. That's a force multiplier.

If you're an AI expert trying to learn domains, you're starting from zero on the context and judgment that actually matters. You'll build technically impressive solutions that don't quite fit the real problem.

The domain expert has the harder-to-replicate asset. AI capability is the learnable skill that unlocks it.

Your role shifts from creation to curation. This is what expertise looks like in an AI-augmented world.

AI can generate the first draft, the analysis, the list of options. It can pull together information from sources you'd never have time to read. It can structure thinking and identify patterns across data sets.

But someone still needs to know what's actually right for this situation. What's missing from that analysis. What's technically correct but politically impossible. What will work in theory but fail in practice because of factors the AI can't see.

The expert becomes the quality filter and the decision-maker. You're not producing the raw output anymore. You're evaluating it, refining it, and making the judgment calls about what to actually do.

That's higher-level work than what most knowledge workers do today. It requires more expertise, not less. But it's a different kind of expertise than we're used to valuing.

The Learning Velocity Advantage

When I talk about curious, highly flexible learners with multi-domain coverage, people nod along. But what does that actually mean in practice? How do you develop that capability?

Cross-pollination is a skill you can practice. Learning across domains isn't about becoming an expert in everything. It's about recognizing patterns and applying concepts from one field to another.

I've used logistics concepts to redesign project management workflows. I've applied sales frameworks to internal stakeholder communication. I've taken manufacturing efficiency principles and used them to optimize service delivery processes.

The cognitive process looks like this: you learn a concept or framework in one domain. You abstract it to its core principle, stripping away the domain-specific details. Then you look for analogous situations in a different domain where that principle might apply.

For example, the supply chain concept of "buffer inventory" to handle demand uncertainty translates directly to "slack time in project schedules" or "redundancy in staffing plans." The underlying principle is the same: you're managing uncertainty by building in cushion where variability is highest.
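To make the abstraction concrete, here's a toy sketch of that shared principle in code. Everything here is an illustrative assumption, not anything from the article: the 1.65 z-score, the normal-demand assumption, and all the numbers are hypothetical.

```python
import math

def buffer_size(std_dev, z_score=1.65):
    """Size a buffer to cover variability: z_score * standard deviation.

    z_score=1.65 targets roughly a 95% coverage level under a
    normal-distribution assumption.
    """
    return math.ceil(z_score * std_dev)

def schedule_slack(task_estimates):
    """Apply the same principle to project schedules.

    task_estimates: list of (expected_hours, std_hours) tuples.
    Tasks with higher duration variance get proportionally more slack.
    """
    return [{"expected": mu, "slack": buffer_size(sigma)}
            for mu, sigma in task_estimates]

# Buffer inventory: weekly demand has a std dev of 20 units.
stock_buffer = buffer_size(20)            # 33 extra units on hand

# Schedule slack: the 8-hour task is far more uncertain, so it gets more cushion.
plan = schedule_slack([(10, 1), (8, 4)])  # slack of 2h and 7h respectively
```

Same formula, two domains: once you see "cushion proportional to variability" as the core principle, where the cushion lives (units on a shelf, hours in a schedule, people on a bench) is just a detail.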

The more domains you learn, the more patterns you recognize. And AI makes this dramatically easier because you can use it to help you understand the core principles of unfamiliar domains quickly, then test whether your cross-domain applications actually make sense.

Curiosity isn't just a personality trait, it's a practice you can cultivate. Some people are naturally curious, but anyone can become more curious through deliberate habits.

Systematic curiosity means having a method for exploring new domains. When you encounter something unfamiliar, instead of glossing over it, you stop and investigate. Not for hours, just for a few minutes. What is this thing? Why does it work this way? What problem was it designed to solve?

I keep a running list of concepts I don't fully understand. When I have 15 minutes, I pick one and dig into it. Sometimes with AI, sometimes with articles or conversations. The goal isn't mastery, it's familiarity. I want to build a mental map of adjacent territories.

The key is making curiosity sustainable. If you try to deeply learn everything that interests you, you'll burn out or become scattered. Instead, you're building breadth first. You're developing conversational competence across many areas, then going deep only where depth is strategic.

Adjacent domain strategy is how you prioritize learning. You can't learn everything, so you need a framework for choosing what to learn next.

Start with where you already are. What domains are one step away from your current expertise? If you're in finance, supply chain operations is adjacent. So is data analytics. So is regulatory compliance. These domains share concepts, stakeholders, and business processes with what you already know.

Learning graphic design is further away. It might be interesting, but it doesn't multiply the value of your finance expertise the way supply chain knowledge does.

The strategic question is: which adjacent domains, when combined with what you already know, create the most value? Usually it's domains that either feed into or receive output from your current work. Or domains that serve the same stakeholders you serve.

For me as someone who's operated as CRO, COO, and CTO, the adjacent domains that multiplied each other were organizational behavior, change management, and technical architecture. They all touched the same core problem: how do you actually implement technology change in complex organizations?

Build multi-domain coverage strategically, not randomly. Think about domains that compound each other's value. That's how you become irreplaceable, not by knowing one thing better than anyone, but by combining things in ways no one else can.

Why Small Companies Will Win

This shift toward AI commoditization doesn't just threaten existing business models. It creates opportunities for new ones. And I think small companies are positioned to win in ways that aren't obvious yet.

The coordination cost collapse changes everything. Large organizations have always had advantages in resources and scale. They can invest in big projects, weather market fluctuations, and access capital efficiently.

But they've also always had disadvantages. Coordination overhead, bureaucracy, slow decision-making, misaligned incentives across departments, communication breakdowns, political infighting. The larger the organization, the more energy goes into managing internal complexity rather than serving customers.

AI eliminates many of the tasks that used to require large teams. That means the coordination advantage of being small suddenly outweighs the resource advantage of being big.

A five-person team with AI can move as fast as they can think. No committees, no approval chains, no cross-departmental alignment meetings. Everyone understands the whole business. Decisions happen in conversations, not email threads. When you spot an opportunity, you can pivot immediately.

That combination of speed and clarity is a massive competitive advantage when the technology itself is commoditized. Everyone has access to similar AI capabilities; the winners will be the ones who can deploy them fastest and most intelligently.

Niche specialization becomes the winning strategy. Instead of trying to be everything to everyone, which requires scale, small teams can go deep in specific niches.

AI handles the generalist work. The market research, the first-draft content, the routine analysis, the standard procedures. What humans provide is specialized judgment and relationship depth in narrow domains.

Picture this: a three-person firm that only does AI implementation for regional hospital systems in the Southeast. They know those systems intimately. They understand the specific regulatory environment, the common technology stacks, the typical organizational structures. They've seen the failure patterns. They know the key stakeholders at most of their target clients.

They can't compete with McKinsey on brand or resources. But they can run circles around McKinsey on speed, relevance, and practical implementation in their specific niche. And they can charge accordingly, because the value of specialized fit is higher than the value of general brand.

That's the future. Highly specialized boutique firms that combine AI leverage with deep domain expertise and strong client relationships. Clients will increasingly prefer them over large generalist firms, because they get better outcomes faster with less friction.

The ownership economics are compelling. Here's the math that matters: if 3-5 people with AI can do what used to take 30-50 people, and they own the business, the economics are extraordinary.

Instead of revenue being split among 50 people (with the bulk going to partners and shareholders), it's split among 5. Better margins, more control, and a direct connection between effort and reward.
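A back-of-the-envelope sketch of that split. The revenue and margin figures here are entirely hypothetical, chosen only to show the shape of the comparison:

```python
def profit_per_person(revenue, headcount, margin):
    """Annual profit divided evenly across the people splitting it."""
    return revenue * margin / headcount

# Hypothetical figures for illustration only.
traditional = profit_per_person(revenue=10_000_000, headcount=50, margin=0.15)  # 30,000 per head
boutique = profit_per_person(revenue=3_000_000, headcount=5, margin=0.40)       # 240,000 per owner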

Lower fixed costs mean you can weather downturns more easily. You don't have the overhead of maintaining a large organization. No HR department, no facilities management, no middle management layer. Just the people who actually deliver value to clients, leveraging AI to handle everything else.

This isn't about everyone becoming a solo entrepreneur. It's about ownership groups of 3-10 people building profitable, sustainable businesses without needing to scale to hundreds of employees.

For the people who position themselves right, this is a wealth-building opportunity. You're not climbing a corporate ladder hoping to make partner in 15 years. You're building equity from day one in a business with fundamentally better economics than traditional service firms.

Anti-fragile structures win in uncertain times. Small, AI-leveraged companies aren't just more profitable. They're more resilient.

Lower fixed costs mean less vulnerability to revenue fluctuations. You don't have massive payroll obligations that force you into survival mode when a few clients churn.

The ability to pivot quickly means you can respond to market changes before large competitors even finish their quarterly planning cycle. When you see an opportunity or threat, you can redirect resources immediately.

Less legacy infrastructure means fewer dependencies and single points of failure. Remember that AWS outage? Large enterprises dependent on cloud infrastructure were paralyzed. Small companies with owned infrastructure, or with diversified architecture, kept running.

Distributed rather than centralized risk. If you're one person in a 5,000-person company and your division gets cut, you're job hunting. If you're one person in a five-person firm and you lose a major client, you're part of the team figuring out the solution. Your fate is tied to the group's success, not to corporate politics three levels above you.

This anti-fragile structure matters more as the pace of change accelerates. The companies that survive won't be the biggest or the oldest. They'll be the most adaptable.

The Convergence Is Both Amazing and Terrifying

What amazes me is how quickly capabilities proliferate across the ecosystem. Something launches, and within days, everyone else has it. Most of this technology is built on open source foundations. The coding ability of these systems means they can recreate features by mimicking or iterating on what they see.

It's genuinely impressive to watch. But it also means differentiation through technology alone is becoming almost impossible.

What terrifies me is how many people aren't preparing for this shift. They're still operating like knowledge work will protect them. Like their expertise is a moat. It's not. Not anymore.

The competitive threat isn't AI itself. It's people who effectively leverage AI outperforming those who don't. And that gap is widening every single day.

What I'm Doing About It

I'm building my own infrastructure. I'm learning these tools deeply, not just using them. I'm thinking about how to combine AI leverage with things that can't be commoditized: relationships, physical presence, specialized domain expertise that requires years of context.

And I'm helping companies do the same thing. Not by selling them automation. By building AI capability through their people first, then automating based on what they learned. AI literacy before AI automation.

Because here's what I've learned: Technology adoption isn't something you can delegate. Leaders need to understand these tools themselves. They need to know what's possible, what's mature, and where to deploy strategically versus where to be patient.

Meet the technology where it's at. That's always been my approach, and it matters more now than ever.

Many of you are already building this capability in your own way. You're experimenting, learning, adapting. The fact that you're thinking about these questions puts you ahead of most people.


Connect with Sean here: https://www.linkedin.com/in/sean-patterson-ct/