"Measure what you treasure"
Why Bast AI is redefining success through explainable neuro-symbolic intelligence
Your data. Your ontology. Your intelligence. Your future.
“Our digital exhaust speaks volumes. Not just about what we’ve done—but who we are becoming.” This is how I would open the conversation - like a bull in a china shop, I would always try to jar people awake and force them to understand this basic truth. I yearn for the day I can sponsor my own research on what we are becoming, but that is not today.
The crisis of measurement in AI
I have been using the chart above comparing successful versus unsuccessful people in my keynotes since 2017, and I can do an entire talk about why we need to measure what we treasure. Human beings are not all that "deep" as my kid would say—we are remarkably easy to manipulate into reward-seeking behavior. This fundamental truth about human psychology has never been more relevant than in today's AI landscape, where vanity metrics are driving astronomical valuations for systems that fundamentally fail to deliver meaningful human augmentation.
Benedict Evans recently highlighted this measurement crisis in AI, noting that "too many people are still asking consumers and enterprises questions like 'do you use AI?' or even 'have you used AI in the last year?'"1 The industry has become obsessed with token counts, weekly active users, and model parameters—metrics that tell us nothing about whether these systems actually improve human decision-making or create genuine value.
The problem runs deeper than poor metrics. As R.G. Collingwood argued in The Idea of History, our understanding of any phenomenon is inevitably shaped by the perspective of the observer, not the observed reality itself.2 When we measure AI success through the lens of traditional software metrics, we miss the fundamental question: Are we augmenting human intelligence or merely creating sophisticated automation that keeps humans in the dark about how decisions are made?
Why current "Neuro-Symbolic" AI falls short
You'll hear the term "neuro-symbolic AI" everywhere these days, especially from vendors who promise symbolic intelligence but deliver nothing more than pipelines that stitch together OCR, natural language processing, and machine learning. Companies claim to offer "neuro-symbolic AI" solutions, but their approaches focus primarily on clinical data processing and risk adjustment coding rather than true symbolic reasoning—their so-called "knowledge graphs" are just entity databases with no reasoning layer. They can extract codes. They can optimize billing. But they can't explain their reasoning process.
That's not symbolic reasoning. That's spreadsheet automation dressed up in AI marketing language.
True neuro-symbolic AI requires more than combining neural networks with rule-based systems. It demands the integration of symbolic reasoning with neural pattern recognition in a way that makes every inference transparent and traceable. Without this explainability, AI systems become what they fundamentally are: sophisticated control mechanisms pretending to be intelligent partners.
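To make that concrete, here is a minimal sketch (in Python) of what a traceable inference step could look like. The Fact and Rule types, the rule names, and the confidence threshold are all hypothetical illustrations, not Bast AI's actual implementation: a neural layer extracts candidate facts with confidence scores, and a symbolic layer applies explicit rules while recording the rule and evidence behind every conclusion.

```python
from dataclasses import dataclass

# Hypothetical types for illustration only -- not Bast AI's actual API.

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str
    confidence: float  # produced by the neural extraction layer

@dataclass(frozen=True)
class Rule:
    name: str
    premises: tuple    # (predicate, obj) pairs that must all hold
    conclusion: str

def infer(facts, rules, threshold=0.8):
    """Apply symbolic rules over neurally extracted facts.

    Every conclusion records the rule that fired and the evidence it
    used, so the inference can be audited end to end."""
    trace = []
    for rule in rules:
        evidence = []
        for pred, obj in rule.premises:
            match = next((f for f in facts
                          if f.predicate == pred and f.obj == obj
                          and f.confidence >= threshold), None)
            if match is None:
                break  # a premise failed; the rule does not fire
            evidence.append(match)
        else:  # all premises matched
            trace.append({"conclusion": rule.conclusion,
                          "rule": rule.name,
                          "evidence": evidence})
    return trace

# Stubbed neural-layer output: facts with confidences.
facts = [Fact("pt-1", "has_symptom", "fever", 0.93),
         Fact("pt-1", "has_symptom", "cough", 0.88)]
rules = [Rule("flu-screen",
              premises=(("has_symptom", "fever"), ("has_symptom", "cough")),
              conclusion="recommend influenza test")]

for step in infer(facts, rules):
    print(step["conclusion"], "because rule", step["rule"], "matched",
          [(f.predicate, f.obj, f.confidence) for f in step["evidence"]])
```

The point of the sketch is the trace: nothing is concluded without a named rule and the specific evidence that satisfied it, which is exactly what the pipeline-of-OCR-and-ML systems cannot produce.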
What human augmentation really means
Remember that history is the idea of the human who writes about it, not the civilizations they are writing about. Data is an artifact of human experience, and our digital exhaust speaks volumes—but only if you know how to listen and, more importantly, how to explain what you're hearing. Most AI systems today collect behavior in the background, reinforce surface-level patterns, and optimize for control. What's missing is context. What's missing is consent. What's missing is the human mental model of how information is received. When data is harvested without consent, humans change their behavior accordingly. Who else has been actively or even passively lying to systems that are just hungry for data?
To augment someone is to help them see more clearly and reason more confidently. By definition, it is the act of listening to and understanding what someone does all day that enables augmentation, and that produces a human who is willing to adopt the AI system. Without adoption we get command and control, aka “strategic outsourcing,” “labor arbitrage,” colonial systems of governance where slavery is a means to an end.
What totally pisses me off is that there is a fantastic alternative narrative here - we can help humans learn faster by using AI systems that are explainable. Let me repeat that - we can help humans learn more quickly by relating any new information to their existing mental models. Why are we not doing this today? Welp, because it is Tuesday in 2025 and the business models demand data, not understanding. Here is what we are doing at Bast AI: we have built systems that (see the sketch after this list):
Understand the user's language and mental framework
Adapt to their cognitive processes and learning style
Show their reasoning for every answer and recommendation
Learn transparently from every interaction
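As a rough sketch of those first three points, consider how an answer could be shaped around a user's existing mental model while always carrying its reasoning. The UserModel fields and the explain function below are assumptions for illustration, not our production schema:

```python
from dataclasses import dataclass, field

# Illustrative only: names and fields here are assumptions, not Bast AI's schema.

@dataclass
class UserModel:
    literacy_level: str                       # e.g. "clinician", "patient"
    known_concepts: set = field(default_factory=set)

def explain(answer: str, reasoning: list, user: UserModel) -> str:
    """Shape an answer around what the user already knows, and always
    attach the reasoning steps that produced it."""
    # Anchor the new information to concepts the user already holds.
    anchors = [c for c in user.known_concepts if c in answer.lower()]
    lines = [answer]
    if anchors:
        lines.append(f"(Builds on what you already know: {', '.join(sorted(anchors))})")
    lines.append("How we got here:")
    lines.extend(f"  {i}. {step}" for i, step in enumerate(reasoning, 1))
    return "\n".join(lines)

user = UserModel(literacy_level="patient", known_concepts={"fever", "flu"})
print(explain("Your fever pattern suggests a flu test is worthwhile.",
              ["Symptoms matched the influenza screening rule",
               "Rule and evidence are recorded in your audit trail"],
              user))
```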
Without that transparency, AI becomes just another control system pretending to be smart, harvesting data while keeping users ignorant of the decision-making process. Ask for a receipt for their use of your data! Ask people to show their work—ask them what frameworks are being used. I so enjoyed the "leaked system prompt" article that shows the programmed reasoning behind the "Controlled AI Personality."3
How Bast AI works: symbolic, explainable, and yours
Every Bast deployment begins with your data in your environment. We never harvest or train on anyone else's content. Instead, we install a fully controllable pipeline where:
Data is extracted and grounded in domain-specific ontologies that reflect your organization's unique knowledge structure
Every interaction is versioned and traceable, creating an audit trail of reasoning
Predictions are made using deterministic logic that can be examined and validated
The system runs behind your firewall or in your cloud container, ensuring complete data sovereignty
This gives organizations a private, explainable AI engine they fully own and can adapt to any role, setting, or literacy level. Unlike black-box systems that optimize for engagement metrics, Bast AI optimizes for understanding and genuine cognitive partnership.
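One way to picture the versioned audit trail is as a hash-chained log where every interaction records the ontology version and the deterministic trace that produced the answer. The field names and the hash-chain scheme below are illustrative assumptions, not Bast AI's actual storage format:

```python
import datetime
import hashlib
import json

# A minimal sketch of a versioned, traceable interaction record.
# Field names and the hash-chain scheme are illustrative assumptions.

def record_interaction(log: list, query: str, ontology_version: str,
                       inference_trace: list) -> dict:
    """Append an interaction to a hash-chained audit log so every answer
    can be traced to the ontology and rules that produced it, and so
    tampering with history is detectable."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "query": query,
        "ontology_version": ontology_version,  # which knowledge structure answered
        "trace": inference_trace,              # deterministic rule applications
        "prev_hash": prev_hash,                # links this entry to the one before
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

log = []
record_interaction(log, "Should we order a flu test?",
                   ontology_version="clinic-onto-v2.3",
                   inference_trace=["flu-screen rule fired on fever+cough"])
print(log[0]["hash"][:12], "links to", log[0]["prev_hash"])
```

Because each entry hashes the one before it, the log can be validated offline, behind your firewall, without trusting anyone outside your environment.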
Learning from language in real time
Bast AI was designed to learn from every interaction, capturing not just content but context—how confident users sound, what they already know, and where they might be confused. Inspired by Piaget's model of cognitive development, Bast reduces the period of disequilibrium—that moment when something doesn't make sense—by reshaping answers into formats the user can receive and build upon.
This isn't just personalization. It's cognitive alignment at the deepest level, where the AI system adapts its reasoning style to match human mental models rather than forcing humans to adapt to machine logic.
What transparency makes possible
When AI can explain its thinking, we stop asking it to prove itself—we begin to trust it. And trust changes what we can measure. With Bast, organizations can track:
How often patients truly understand their next steps, not just click "acknowledge"
How much cognitive load caregivers can redirect from administrative tasks to patient care
How students shift from confusion to comprehension, measuring learning velocity rather than completion rates
How decision-makers gain confidence in their reasoning, not just faster processing
You're not measuring efficiency. You're measuring growth. You're intentionally and transparently teaching human beings what success feels like, creating positive reinforcement loops that strengthen both human and AI capabilities.
You're not counting clicks. You're tracking clarity and cognitive development.
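For example, "learning velocity" could be operationalized as the time between a user's first confusion signal and their first verified-comprehension signal. The event names and formula below are one hypothetical definition for illustration, not a standard metric or Bast AI's exact formula:

```python
from datetime import datetime

# Hypothetical events and metric definition -- one way "learning velocity"
# could be computed; not a standard metric.

def learning_velocity(events):
    """Minutes from the first confusion signal to the first verified
    comprehension signal; None if the user never got there."""
    confused = next((e["t"] for e in events if e["kind"] == "confusion"), None)
    understood = next((e["t"] for e in events if e["kind"] == "comprehension"), None)
    if confused is None or understood is None or understood < confused:
        return None
    return (understood - confused).total_seconds() / 60

events = [
    {"t": datetime(2025, 6, 1, 9, 0), "kind": "confusion"},       # re-asked the same question
    {"t": datetime(2025, 6, 1, 9, 14), "kind": "comprehension"},  # restated next steps correctly
]
print(learning_velocity(events), "minutes from confusion to comprehension")
```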
The future of AI is more human, not more machine
Recent research from Apple has shown that even frontier AI models like OpenAI's o3 and Anthropic's Claude "face a complete accuracy collapse beyond certain complexities" when dealing with simple puzzles like Tower of Hanoi.4 This isn't a temporary limitation—it's evidence that current AI approaches, despite billions in investment, are fundamentally limited by their opacity and inability to truly reason through problems.
Real neuro-symbolic systems are not optimized for prediction accuracy or engagement metrics. They are optimized for explanation and genuine understanding. They don't ask for trust—they earn it through transparency. They don't replace human judgment—they augment human reasoning capabilities.
Bast AI was built to show its work, respect the user's cognitive autonomy, and evolve with the organization—not against it. We build forests of AI models that can be examined and understood, not black boxes that optimize for mysterious objectives. We measure learning and cognitive growth, not surveillance and behavioral manipulation. We amplify what makes us human: our ability to reason, learn, and make conscious decisions based on understanding rather than algorithmic conditioning.
Ready to configure your own explainable AI?
The choice is clear: continue investing in black-box systems that optimize for vanity metrics and keep humans dependent on unexplained outputs, or configure AI that genuinely augments human intelligence through transparency and collaborative reasoning.
At Bast AI, we believe the future belongs to systems that measure what truly matters—not token generation or user engagement, but genuine cognitive partnership and human flourishing. Our neuro-symbolic approach ensures that every insight can be traced, every decision can be explained, and every interaction strengthens both human and artificial intelligence.
Your data. Your ontology. Your intelligence. Your future.
Ready to build? Reach out → hello@bast.ai or click the button above
1. Evans, Benedict. "AI's metrics question." Benedict Evans, June 9, 2025. https://www.ben-evans.com/benedictevans/2025/6/9/generative-ais-metrics-question
2. Collingwood, R.G. The Idea of History. Oxford University Press, 1946.
3. MKWriteshere. "Claude 4's Leaked System Prompt Exposes AI's Controlled Personality Deception." Towards AI, May 2025. https://pub.towardsai.net/claude-4s-leaked-system-prompt-exposes-ai-s-controlled-personality-deception-03ab59431b93
4. "Frontier AI Models Are Getting Stumped by a Simple Children's Game." Futurism, June 13, 2025. https://futurism.com/frontier-ai-models-stumped-childrens-game