Basiliac
The decision layer for intelligent systems
A universal engine that generates interpretable, adaptive, and robust decision rules — for any domain where real-time decisions matter.
Why it matters
The deployment wall is the real problem
The most consequential systems humans have built do not operate as black boxes. Legal frameworks, medical protocols, engineering standards — the logic behind their decisions can be examined, challenged, and improved. You can point to the rule that drove an outcome, question it, and change it.
Modern AI has moved in the opposite direction. Capabilities have advanced rapidly, but reasoning is often opaque, difficult to audit, and hard to defend where accountability matters.
A model may detect. A model may predict. A model may recommend. But when the next step is an action with operational, financial, regulatory, or safety consequences — institutions need decision logic they can read, test, govern, and stand behind.
Organisations have invested heavily in AI and discovered that the hardest problem is not capability — it is deployment. Black-box systems cannot clear the bar in regulated, safety-sensitive, or operationally complex environments. Capability without deployability is not enough.
That is the gap Basiliac exists to close.
Our approach
Historical data informs. Live reality decides.
Most intelligent systems use historical data twice — once to generate candidates, and again to select among them. That second step is where brittleness enters.
Method 01
Hypothesis-driven
An expert forms a view, tests it, and refines it. Slow, expert-dependent, and prone to overfitting through iteration.
Method 02
Data mining
Generate a large candidate pool, filter on historical data. Selection bias is introduced in the filtering step.
Basiliac
Reality-arbitrated
Historical data constructs features. One rule is generated, deployed immediately. Live reality is the only arbiter.
At the heart of the engine is a single construct — the mechanism that produces each rule. This is where the real work lives. Everything else — deployment, survival, retirement — is architecture around it. The generation mechanism is not a search procedure or a sampling process. It is a disciplined construction, designed from the ground up to produce rules that are stable under conditions that historical evaluation alone cannot anticipate.
Domain-grounded feature engineering
Raw data, domain expertise, or upstream model outputs define the building blocks — ensuring every rule is anchored in what actually matters.
Single-rule generation
The engine produces one rule at a time, each born with the intent to deploy — not to be filtered, ranked, or compared against alternatives.
Immediate live deployment
Each rule enters live reality from the moment it is generated. No candidate pool. No selection exercise. No refinement loop.
Automatic retirement
Rules that fall below performance thresholds are removed. Those that survive compound into an ever-evolving, adaptive system.
The measure of success is not how many rules are generated. It is how many survive — and what they collectively know.
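The lifecycle above (single-rule generation, immediate deployment, automatic retirement) can be sketched in a few lines. This is a minimal illustration only; the names, structure, and thresholds here are assumptions made for exposition, not Basiliac's actual interface.

```python
# Illustrative sketch of the rule lifecycle. Rule, RuleLayer, and the
# retirement thresholds are hypothetical names, not Basiliac's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    """One explicit rule: a human-readable condition plus its executable form."""
    name: str
    condition: str                      # readable statement of the logic
    predicate: Callable[[dict], bool]   # executable form of the condition
    outcomes: list = field(default_factory=list)  # live results, not backtests

    def decide(self, features: dict) -> bool:
        return self.predicate(features)

    def record(self, success: bool) -> None:
        self.outcomes.append(success)

    def live_success_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes)

class RuleLayer:
    """Deploys each rule immediately and retires underperformers."""
    def __init__(self, retire_below: float = 0.6, min_observations: int = 20):
        self.retire_below = retire_below
        self.min_observations = min_observations
        self.live = []

    def deploy(self, rule: Rule) -> None:
        # No candidate pool, no ranking: straight into live reality.
        self.live.append(rule)

    def retire_underperformers(self) -> None:
        # Rules below the performance threshold are removed; survivors
        # remain part of the evolving system.
        self.live = [
            r for r in self.live
            if len(r.outcomes) < self.min_observations
            or r.live_success_rate() >= self.retire_below
        ]
```

The point of the sketch is the shape of the loop: each rule carries its own legible condition, is judged only on live outcomes, and exits automatically when it stops working.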
What makes it different
Not another black-box model
Basiliac is the decision infrastructure that allows powerful models to be used where trust, control, and auditability are non-negotiable.
Interpretable
Every decision is traceable to explicit, human-readable logic. Not a probability. Not a score. A clear statement of what conditions drove the outcome.
Adaptive
The rule layer evolves continuously as conditions change — without waiting for manual retraining cycles. Rules that stop working are retired automatically.
Reality-tested
Rules retain their place through live deployment, not historical selection. The validation is the deployment.
Where it applies
Any environment where decisions must be auditable
Basiliac's inputs are features — structured representations of the state of the world in a given domain. Those features may come from raw data and domain expertise, or from the outputs of modern AI systems. In that configuration, Basiliac becomes the decision layer above perception: turning model outputs into deterministic, auditable action.
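In concrete terms, a decision layer above perception might look like the following. This is a hypothetical illustration, assuming an upstream model that emits an anomaly score; the feature names and thresholds are invented for the example.

```python
# Hypothetical example: turning an upstream model's output into a
# deterministic, auditable action. Feature names are illustrative.
def decide(features: dict) -> dict:
    """Apply an explicit rule and return the action plus the reason for it."""
    # Features may come from raw data, domain expertise, or model outputs.
    score = features["model_anomaly_score"]
    in_maintenance = features["maintenance_window"]

    if score > 0.9 and not in_maintenance:
        action = "isolate_device"
        reason = "anomaly score above 0.9 outside a maintenance window"
    else:
        action = "allow"
        reason = "conditions for isolation not met"

    # The decision record is the audit trail: inputs, rule, action.
    return {"action": action, "rule": reason, "inputs": features}
```

The model supplies a score; the rule supplies the accountable logic. Anyone reviewing the outcome can read exactly which conditions drove the action.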
Regulated, safety-sensitive, and operationally complex environments are not edge cases. They host some of the most consequential systems in the world — environments where black-box outputs alone are insufficient, and where trust must be designed into the architecture.
Why now
Four forces converging
Regulatory pressure is intensifying
The EU AI Act, FDA guidance on AI in medical devices, and SEC scrutiny are converging on one requirement: decisions must be explainable.
The deployment wall is real
Organisations have invested heavily in AI only to hit a structural barrier: black-box systems cannot clear the bar in regulated or safety-sensitive environments.
Edge environments demand lightweight logic
Simple rules run anywhere — on sensors, network switches, vehicle ECUs, and embedded systems — without GPUs or cloud infrastructure.
The stack is maturing
Enterprises are distinguishing between systems that generate insight and systems they can trust to act. Perception is largely solved. The decision layer is not.
About
Who we are
Basiliac was founded by a systems builder and quantitative researcher with more than two decades of experience designing, building, and operating real-time decision systems in live financial markets.
Financial markets are among the most demanding environments in which decision logic can operate.
Working through those conditions over many years forced the development of a particular discipline: how to build rules that remain stable across regime changes, adapt without human intervention, and remain interpretable under pressure.
That experience led to a broader realisation. The underlying challenge was never unique to finance. Many of the world's most important systems face the same structural problem: real-time decisions, under uncertainty, in changing environments — while remaining understandable to the humans responsible for them.
Those foundations became the starting point for a much larger question: what would it look like to build a universal engine for interpretable, adaptive, and robust decision-making — one that could operate across any domain where decisions carry consequence?
Basiliac was founded to answer that question. We believe the next wave of AI progress will be measured not by capability alone — but by whether intelligent systems can be made legible, governable, and worthy of trust. That is the problem worth building for.
Contact
Get in touch
We are looking for a small number of pilot partners in regulated or high-impact domains. If you own AI risk, compliance, or production ML — we would like to hear from you.
Follow on LinkedIn