LLMs
Broken English Is Better Prompt Engineering
A 2025 study found Polish outperforms English in long-context LLM tasks. Ukrainian grammar instincts explain why — and how anyone can prompt better.
Why Your LLM Asks Questions (And Why You Should Too)
How RLHF trains models to seek clarification instead of guessing — and a four-agent pipeline that brings the same discipline to your own requirements.
Context Engineering: The Skill That Actually Matters
Context engineering — not prompt engineering — is where the real leverage lives. The discipline of deciding what goes into the context window, and …
Beyond the Chat Window: UX on Steroids with LLMs
Chat is the laziest AI integration. The real opportunity is components that remember users and act on context they've already provided.
Fixing Agent Amnesia With Federated Memory
A-MEM's Zettelkasten-inspired approach to agent memory, and why software projects need federated memory with cross-domain linking — not one big vector …
What Is AI-Native?
AI-native is more than a buzzword. It marks a real architectural shift where AI becomes the foundation, not a feature bolted on.
The AI Knows Your Client's Phone Number (And So Does Everyone Else)
RAG pipelines leak PII through prompt injection. A substitution layer swaps real data for realistic fakes before it reaches the LLM, neutralising the …
A CRM That Knows What It Doesn't Know
Probabilistic lead scoring with Pyro replaces brittle point systems with honest uncertainty, email fatigue modelling, and self-improving predictions.
Can AI Create Its Own Programming Language?
What would a programming language designed for AI look like? The thought experiment reveals deep unsolved problems and points to probabilistic …
Building a Graph-Based Intent Modelling Tool
A proof-of-concept tool for graph-based system modelling — defining system behaviour, validating it structurally, and generating tests from the …
Adopting System Models Incrementally
Practical guide to adopting graph-based system models incrementally. Covers escape hatches, LLM workflows, and week-by-week adoption strategy.
The Self-Validating Graph
Graph-based system models validate themselves through structural checks, semantic analysis, and automatic test generation from invariants.
Code Is a Lossy Format
Source code is a lossy format that discards intent. LLMs expose this weakness. Graph-based models preserve meaning as first-class structure.
When AI Surprised Its Creators: The GPT-2 Story
How GPT-2's simple language prediction training led to unexpected capabilities like translation and reasoning that surprised its creators.
How to Build AI Employees That Run Parts of Your Company
Learn how to build AI assistants that act like real employees using SOPs, LLMs, and automation frameworks like CrewAI and AutoGen for business …