AMI is built on fifteen distinct AI techniques spanning more than ten disciplines — each solving a specific class of problem that ATM teams face. This document explains every technique: what it does, where it’s used, and why it qualifies as AI.
Ask questions in plain English. Get sourced, verified answers in seconds — no SQL, no IT tickets, no waiting.
Powers the AMI chatbot across five intelligence modes — Dashboard, Analyst, Forensic, Research, and Engineer — each tuned with a different temperature for the task at hand.
Large Language Models are deep neural networks trained on billions of tokens of text. They don’t follow hand-coded rules — they learn statistical patterns in language and use those patterns to generate human-quality text, reason over complex contexts, convert natural language to SQL, and produce structured analytical outputs.
The multi-provider architecture means AMI can run entirely on-premise (no data leaves your network) or use cloud models when permitted — and switch between them without changing the user experience.
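A minimal sketch of what per-mode tuning could look like. The five mode names come from this document; the temperature values and the on-premise/cloud provider ordering are illustrative assumptions, not the shipped configuration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModeConfig:
    temperature: float    # lower = more deterministic output
    providers: tuple      # tried in order; on-premise ("local") first

# Hypothetical settings — only the mode names are from the document.
MODES = {
    "Dashboard": ModeConfig(0.0, ("local", "cloud")),
    "Analyst":   ModeConfig(0.2, ("local", "cloud")),
    "Forensic":  ModeConfig(0.0, ("local",)),   # data never leaves the network
    "Research":  ModeConfig(0.7, ("local", "cloud")),
    "Engineer":  ModeConfig(0.1, ("local", "cloud")),
}

def settings_for(mode: str) -> ModeConfig:
    """Look up the model settings for a chatbot mode."""
    return MODES[mode]
```

Because the provider list is per-mode data rather than code, switching between on-premise and cloud models changes nothing in the user experience.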
The AMI forensic pipeline is a LangGraph state machine with five specialised nodes that work together autonomously:
A simple chatbot takes a prompt and returns a response. An AI agent decides what to do, not just how to answer. It breaks problems into steps, selects tools, executes a plan, evaluates its own output, and self-corrects.
This is goal-directed autonomous reasoning — the system pursues an objective across multiple steps without human intervention at each stage. It’s the difference between a calculator and an analyst.
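The agent loop described above — decompose, select a tool, execute, evaluate, self-correct — can be sketched in a few lines of plain Python. Everything here (the `plan`/`critic` callables, the retry scheme) is an illustrative skeleton, not the actual LangGraph pipeline:

```python
def run_agent(goal, plan, tools, critic, max_retries=3):
    """Goal-directed loop: break the goal into steps, pick a tool for
    each, execute, let a critic evaluate the output, and retry with
    the critic's feedback folded into the input (self-correction)."""
    results = []
    for step in plan(goal):                     # 1. decompose the goal
        tool = tools[step["tool"]]              # 2. select a tool
        for _attempt in range(max_retries):
            out = tool(step["input"])           # 3. execute
            ok, feedback = critic(step, out)    # 4. evaluate own output
            if ok:
                results.append(out)
                break
            step["input"] += " | " + feedback   # 5. self-correct and retry
        else:
            raise RuntimeError(f"step failed after retries: {step}")
    return results
```

The point of the sketch is structural: the loop pursues the objective across steps without a human approving each one.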
Natural Language Processing is the branch of AI that enables machines to understand, interpret, and generate human language. In AMI, NLP converts messy transaction logs into structured data, classifies social media sentiment, and enables the entire “ask in English” experience.
Without NLP, every query would require SQL expertise. With it, anyone on the team can interrogate the data.
Model how cardholders choose ATMs, segment behaviour patterns, and predict demand shifts — before they happen.
The MNL model is calibrated on observed cardholder behaviour to predict which ATM a customer will choose. Seven utility coefficients are learned via Maximum Likelihood Estimation:
This feeds a 10-step demand prediction pipeline with bootstrap confidence intervals (500 iterations), maturation curves, and demographic adjustments.
Multinomial Logit is a probabilistic machine learning model. It learns from observed behaviour — the seven coefficients are not hand-coded by analysts but estimated from data using maximum likelihood optimisation.
The model captures the trade-offs cardholders actually make (convenience vs. proximity vs. site quality) and uses those learned trade-offs to predict behaviour at sites that don’t exist yet. This is the core of demand forecasting for new ATM deployments.
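The prediction step of a multinomial logit is compact: each candidate ATM gets a utility score from its attributes and the learned coefficients, and a softmax turns utilities into choice probabilities. The feature layout below is illustrative — the document's model has seven MLE-estimated coefficients:

```python
import numpy as np

def choice_probabilities(X, beta):
    """Multinomial logit choice rule: P(choose ATM j) = exp(V_j) / sum_k exp(V_k),
    where V_j = X[j] @ beta is the utility of candidate j.
    X: (n_atms, n_features) site attributes; beta: learned coefficients."""
    v = X @ beta
    v = v - v.max()            # shift for numerical stability
    expv = np.exp(v)
    return expv / expv.sum()
```

Because `beta` is estimated from observed choices, the same coefficients can score a site that does not exist yet — which is exactly how the demand pipeline forecasts new deployments.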
Clustering is unsupervised machine learning — the algorithm discovers hidden structure in data without being told what to look for. Nobody tells DBSCAN where home zones are; it finds them from transaction coordinates. Nobody defines the six mobility profiles in advance; K-Means discovers them from the data.
The entropy-based loyalty scoring uses information theory (a foundation of modern AI) to measure how predictable a cardholder’s behaviour is. High entropy = unpredictable = opportunistic. Low entropy = habitual.
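The entropy scoring itself is standard Shannon entropy over a cardholder's ATM usage distribution — a minimal sketch:

```python
import math
from collections import Counter

def loyalty_entropy(atm_visits):
    """Shannon entropy (bits) of a cardholder's ATM usage.
    0.0 = perfectly habitual (always the same ATM);
    higher values = more unpredictable, i.e. opportunistic."""
    counts = Counter(atm_visits)
    total = len(atm_visits)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A cardholder who always uses one ATM scores 0 bits; one who splits visits evenly between two ATMs scores exactly 1 bit — the score grows with unpredictability.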
Regression is supervised machine learning. The model learns a mathematical function that maps inputs (nearby POIs, population, footfall) to outputs (expected transaction volume) by minimising prediction error on training data.
LassoCV goes further: it automatically performs feature selection by penalising weak predictors to zero. From 196 possible features, the algorithm identifies the handful that actually drive demand. This is automated discovery — the machine finds what matters.
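The mechanism is easy to demonstrate with scikit-learn's `LassoCV` on synthetic data where only two of many features carry signal — a toy stand-in for the document's 196-feature setup, not its actual training code:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 20                       # 20 candidate features; only 2 matter
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Cross-validation picks the penalty strength; the L1 penalty then
# drives the coefficients of uninformative features to (near) zero.
model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
```

The surviving indices are the features the algorithm judged to drive the target — the "automated discovery" step described above.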
Score every potential site, map competitive landscapes, and predict competitor closures — deploy with confidence.
The Analytic Hierarchy Process (AHP) scores every location on six criteria:
Each criterion passes through a logistic S-curve (fuzzy membership function) before weighted aggregation. Validation: Spearman ρ = 0.63 against observed transaction volumes.
Fuzzy logic is a branch of AI that handles uncertainty. Instead of binary “good/bad”, fuzzy membership functions score “how much” a site meets each criterion on a continuous scale. The S-curve shape parameters (steepness k and inflection point) are not guessed — they’re optimised.
Differential evolution jointly optimises all 15 parameters (6 weights + 8 curve shapes + 1 spatial decay) over 500 generations. This is a metaheuristic optimisation algorithm inspired by biological evolution — candidate solutions compete, mutate, and recombine until the fittest survives.
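The scoring building blocks are simple to sketch: a logistic S-curve maps each raw criterion value to a 0–1 membership, and the memberships are aggregated with AHP weights. In the real pipeline the steepness `k`, inflection `x0`, and weights are the parameters that differential evolution calibrates; the values in this sketch are placeholders:

```python
import math

def s_curve(x, k, x0):
    """Logistic fuzzy membership: degree (0-1) to which raw value x
    satisfies a criterion. k = steepness, x0 = inflection point."""
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

def site_score(raw, curve_params, weights):
    """AHP-style weighted aggregation of fuzzy memberships.
    raw, curve_params, weights are aligned per-criterion lists."""
    memberships = [s_curve(x, k, x0) for x, (k, x0) in zip(raw, curve_params)]
    return sum(w * m for w, m in zip(weights, memberships)) / sum(weights)
```

Note the continuous grading: a site just below the inflection point still earns partial credit instead of a binary fail — that is the "how much" behaviour fuzzy logic provides.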
Geospatial AI combines spatial data structures, mathematical models, and network algorithms to make location-aware predictions. The system reasons about space the way a human analyst would — “how far is this from competitors?”, “what’s the catchment population?”, “what’s the fastest route?” — but across thousands of locations simultaneously.
The IDW competition index and gravity model are both spatial decay functions with learned parameters. They don’t use hard distance cut-offs — they model the continuous, non-linear relationship between distance and influence.
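A minimal sketch of such a decay function — each competitor contributes pressure that falls off continuously with distance. The decay exponent `alpha` is fixed here for illustration; in the document's pipeline it is a learned parameter:

```python
def competition_index(distances_km, alpha=1.5):
    """Inverse-distance-weighted competition pressure at a candidate
    site: each competitor contributes 1 / d^alpha. No hard cut-off —
    influence decays smoothly and non-linearly with distance."""
    return sum(1.0 / (d ** alpha) for d in distances_km if d > 0)
```

A competitor 500 m away weighs far more than one 2 km away, but the distant one still counts — unlike a binary "within radius" rule.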
The Cox Proportional Hazards model predicts when a competitor ATM is likely to close, using three learned covariates:
Output: closure probability per competitor ATM, classified into high/medium/low risk tiers. Falls back to Logistic Regression when fewer than 30 closure events are available.
Survival analysis is a branch of statistical machine learning that models time-to-event data. The Cox PH model learns which factors accelerate or delay competitor closures from historical data — it doesn’t use business rules or assumptions.
This gives your bank a predictive edge: know which competitor ATMs are likely to close before they do, and position to capture the displaced transactions. The model quality is measured by the c-statistic (concordance) — a standard ML model evaluation metric.
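The scoring side of a fitted Cox PH model reduces to the proportional-hazards formula, h(t|x) = h0(t)·exp(β·x). The coefficient values, baseline risk, and tier thresholds below are illustrative placeholders, not the fitted model:

```python
import math

def closure_risk(covariates, beta, baseline_risk):
    """Cox proportional-hazards scoring: scale a baseline closure risk
    by exp(beta . x), then bucket into illustrative risk tiers."""
    log_hazard_ratio = sum(b * x for b, x in zip(beta, covariates))
    risk = baseline_risk * math.exp(log_hazard_ratio)
    if risk > 0.6:          # tier cut-offs are assumptions
        return risk, "high"
    if risk > 0.3:
        return risk, "medium"
    return risk, "low"
```

A positive coefficient means the covariate accelerates closure; the model multiplies risks rather than adding them, which is the "proportional" in proportional hazards.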
Detect transaction shifts, equipment failures, and unusual patterns automatically. Act on signals, not surprises.
STL uses LOESS — a non-parametric machine learning technique that fits local regressions across the data. It doesn’t assume a fixed mathematical form; it learns the shape of the trend and seasonality directly from the data.
CUSUM is a sequential analysis algorithm from the same family used in industrial quality control and financial fraud detection. It detects when a process has shifted — automatically, continuously, across hundreds of ATMs, without a human reviewing each chart.
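A two-sided CUSUM is short enough to show in full — it accumulates deviations from a target (for example, the residual left after STL strips trend and seasonality) and raises an alarm once the cumulative drift clears a decision limit. The slack `k` and limit `h` below are conventional textbook defaults, not AMI's tuned values:

```python
def cusum(values, target, k=0.5, h=4.0):
    """Two-sided CUSUM shift detector. Returns the index of the first
    detected mean shift, or None. k = slack (ignores small noise),
    h = decision limit (how much cumulative drift triggers an alarm)."""
    s_hi = s_lo = 0.0
    for i, x in enumerate(values):
        s_hi = max(0.0, s_hi + (x - target) - k)   # upward drift
        s_lo = max(0.0, s_lo - (x - target) - k)   # downward drift
        if s_hi > h or s_lo > h:
            return i
    return None
```

Because the statistics reset to zero while the process is on target, the detector runs continuously per ATM with O(1) state — which is what makes monitoring hundreds of machines cheap.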
When an ATM’s monthly transactions deviate significantly (|z| > 2.0), the system classifies the anomaly through a cascading hypothesis test:
Each classification triggers a different operational response.
Anomaly detection is a core AI discipline. But detecting an anomaly is only half the problem — the other half is explaining it. AMI’s cascading classifier doesn’t just flag outliers; it tests hypotheses in sequence, cross-referencing multiple data sources (transactions, competitor events, seasonal patterns) to assign a root cause.
This is automated reasoning over multiple evidence sources — the same approach used in medical diagnosis systems and industrial fault detection.
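Structurally, the cascade is an ordered sequence of evidence checks, each consulting a different data source, with the first match assigned as root cause. The check names and stub predicates here are illustrative:

```python
def classify_anomaly(z, checks):
    """Cascading hypothesis test: once |z| > 2.0 flags an anomaly,
    run evidence checks in priority order and return the first
    hypothesis that matches. checks: list of (label, predicate)."""
    if abs(z) <= 2.0:
        return "normal"
    for label, check in checks:
        if check():
            return label
    return "unexplained"

# Illustrative cascade — each predicate would really query a data
# source (EJ logs, competitor events, seasonal baselines).
cascade = [
    ("equipment_fault",  lambda: False),
    ("competitor_event", lambda: True),
    ("seasonal_effect",  lambda: False),
]
```

Ordering matters: cheaper or higher-precision hypotheses are tested first, and an anomaly that survives every check is surfaced as "unexplained" rather than silently mislabelled.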
Process mining is AI that discovers actual process flows from event logs. It doesn’t rely on documented procedures — it reconstructs how the ATM actually behaves versus how it should behave.
The heuristic net algorithm automatically identifies the most likely sequence of events, filtering noise and highlighting deviations. This is automated pattern discovery from unstructured operational data — finding the needle in millions of EJ log entries.
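The first step of such discovery — building a frequency-filtered directly-follows graph from raw traces — fits in a few lines. The event names and the support threshold are illustrative, and a production heuristic net adds dependency measures on top of this:

```python
from collections import Counter

def directly_follows(traces, min_support=2):
    """Discover a directly-follows graph from event traces, keeping
    only edges observed at least min_support times — the frequency
    filtering that lets a heuristic net discard noise."""
    edges = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return {edge: n for edge, n in edges.items() if n >= min_support}
```

Edges that survive the threshold form the "normal" process; a rare transition that gets filtered out here is exactly the kind of deviation worth surfacing separately.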
Every AI answer is grounded in your actual data. The system searches, verifies, and cites — it does not hallucinate.
RAG combines two AI disciplines: information retrieval (search) and generative AI (LLMs). Instead of relying solely on what the LLM learned during training, RAG searches your actual documentation and feeds relevant passages to the model.
This grounds every answer in real, current information. The LLM knows your table names, column definitions, and business rules — because it just read them, not because it memorised them months ago.
Embeddings are neural network representations that capture semantic meaning. “Show me failing ATMs” and “list underperforming machines” map to nearby vectors even though they share no words — because the embedding model learned that they mean the same thing.
This is learned semantic understanding, not keyword matching. The system finds relevant examples based on meaning, not string overlap — which is why it works even when users phrase questions in unexpected ways.
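Retrieval over embeddings reduces to nearest-neighbour search under cosine similarity. The 3-d vectors below are toy stand-ins — in AMI they would come from an embedding model with hundreds of dimensions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, corpus, top_k=1):
    """Return the top_k corpus entries whose embeddings lie nearest
    the query embedding — matching by meaning, not shared words."""
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _vec in ranked[:top_k]]
```

Two phrasings with no words in common land near each other because the embedding model placed them there — string overlap never enters the ranking.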
Every answer passes through a four-stage verification pipeline:
Self-verification is a hallmark of advanced AI systems. The Critic node is an independent AI reviewer — it evaluates the work of other AI components using different criteria than those that generated the output.
This is AI checking AI. The generator optimises for helpfulness; the critic optimises for correctness. The tension between these two objectives is what prevents hallucination and ensures every answer is grounded in real data.
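The generator/critic tension can be sketched as a simple loop: the generator drafts, independent critics each apply their own acceptance check, and any rejection is fed back as a revision hint. The stage names and the retry scheme are illustrative, not the actual four-stage pipeline:

```python
def answer_with_verification(question, generate, critics, max_attempts=3):
    """Generator/critic loop. generate(question, hint) drafts an
    answer; each (name, check) critic applies its own criterion;
    failures become a revision hint for the next draft."""
    hint = ""
    for _ in range(max_attempts):
        draft = generate(question, hint)
        failed = [name for name, check in critics if not check(draft)]
        if not failed:
            return draft            # every critic accepted the draft
        hint = "fix: " + ", ".join(failed)
    raise RuntimeError("could not produce a verified answer")
```

Crucially, the critics never generate content — they only accept or reject — so their criteria (correctness, grounding) stay independent of the generator's drive to be helpful.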
Every technique, its AI discipline, and the business problem it solves.
| # | Technique | AI Discipline | Business Problem Solved |
|---|---|---|---|
| 01 | Large Language Models | Deep Learning | Natural language access to all ATM data |
| 02 | Agentic AI | AI Agents | Autonomous multi-step analysis without human steering |
| 03 | Natural Language Processing | NLP | Parsing logs, classifying sentiment, understanding queries |
| 04 | Multinomial Logit | Statistical ML | Predicting which ATM a cardholder will choose |
| 05 | Clustering & Segmentation | Unsupervised ML | Discovering zones, mobility profiles, loyalty tiers |
| 06 | Regression & Feature Selection | Supervised ML | Identifying which location features drive demand |
| 07 | Fuzzy Logic + Optimisation | Decision Science | Scoring sites on multiple criteria with uncertainty |
| 08 | Geospatial AI | Spatial Analytics | Location-aware prediction across thousands of sites |
| 09 | Survival Analysis | Statistical ML | Predicting when competitor ATMs will close |
| 10 | Time Series Decomposition | Signal Processing | Separating trend from seasonality and noise |
| 11 | Anomaly Classification | Anomaly Detection | Explaining why an ATM’s performance changed |
| 12 | Process Mining | Process Intelligence | Discovering actual ATM behaviour from EJ logs |
| 13 | RAG | Information Retrieval + GenAI | Grounding AI answers in actual documentation |
| 14 | Semantic Embeddings | Representation Learning | Finding relevant examples by meaning, not keywords |
| 15 | Multi-Stage Verification | AI Safety | Preventing hallucination; AI checking AI |