What is Causal AI?
A new generation of intelligent machines that understand “cause and effect” – a major step towards truly trustworthy AI.
Causal AI systems combine multiple predictive and causal models, understand the business context, and manage uncertainty, enabling them to solve complex analytical challenges.
Humans rationalize the world by thinking in terms of cause and effect: if we understand why something happened, we can change our behavior to improve future outcomes. Statistics describe the world but are of limited help when it comes to acting on it; causal AI distinguishes correlation from causality, making it a powerful aid for understanding the world and making sound decisions.
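The difference between correlation and causation can be made concrete with a tiny simulation. In this illustrative sketch (our own toy model, not part of any product), a hidden confounder Z drives both X and Y: observationally X and Y move together, yet intervening on X leaves Y unchanged, because X has no causal effect on Y.

```python
import random

random.seed(0)

def sample(n, do_x=None):
    """Toy structural model: a confounder Z drives both X and Y.
    X has no causal effect on Y. do_x simulates the intervention do(X=x)."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = z + random.gauss(0, 0.1) if do_x is None else do_x
        y = z + random.gauss(0, 0.1)          # Y depends on Z only
        data.append((x, y))
    return data

def mean_y(data):
    return sum(y for _, y in data) / len(data)

obs = sample(10_000)
# Observationally, high X goes with high Y (they share the cause Z) ...
high_x = [(x, y) for x, y in obs if x > 1.0]
print(round(mean_y(high_x), 2))               # clearly positive

# ... but intervening on X, do(X=1), leaves Y unchanged: no causal effect.
print(round(mean_y(sample(10_000, do_x=1.0)), 2))  # near 0
```

A purely statistical model would predict that raising X raises Y; a causal model correctly predicts that it does not.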
Our R&D team has deep roots in Causal AI science. We develop state-of-the-art Causal AI core technology based on Bayes Nets and our in-house C++ algorithm acceleration toolbox.
Our product line Projector™ is designed to make building and using our Causal AI technology incredibly simple through an intuitive graphical user interface and a collaborative approach.
Optimized Bayes Nets
Solid scientific foundation for Causal AI
Accelerated model training
Train with hundreds of variables with no performance bottlenecks
Understand the complexity, control the AI model parameters, explain inference decisions graphically
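The kind of inference a Bayes Net performs can be sketched in a few lines. This minimal example (illustrative numbers, unrelated to any product model) does exact inference by enumeration in a two-node network, Rain → WetGrass, applying Bayes' rule to explain an observation:

```python
# Minimal sketch: exact inference in a two-node Bayes Net (Rain -> WetGrass).
# All probabilities are made up for illustration.

P_rain = 0.2
P_wet_given = {True: 0.9, False: 0.1}   # P(WetGrass=T | Rain)

def p_wet():
    # Marginalize over Rain: sum P(Rain) * P(Wet | Rain)
    return sum(
        (P_rain if rain else 1 - P_rain) * P_wet_given[rain]
        for rain in (True, False)
    )

def p_rain_given_wet():
    """Bayes' rule: P(Rain=T | WetGrass=T)."""
    return P_rain * P_wet_given[True] / p_wet()

print(round(p_rain_given_wet(), 3))   # → 0.692
```

Observing wet grass raises the probability of rain from 20% to about 69%; real networks do the same computation over hundreds of variables, which is where optimized inference engines matter.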
Software is increasingly multi-dimensional and complex. Many scientific applications such as computational finance are numerically intensive and require high-performance computing.
Of course, high-level abstraction tools and languages make our lives easier, but at the expense of increased low-level complexity.
Achieving the best performance on a target hardware platform requires code optimization by low-level software experts, who apply a combination of compiler-based, library-based, and code-rewrite optimizations. This involves exploring hundreds or even thousands of design parameters and several weeks of development.
After this manual, iterative fine-tuning process, the generated code is no longer readable by domain experts, nor maintainable or portable to other hardware architectures. Any minor change to the numerical algorithm's parameters requires restarting the optimization process, which leads teams to refrain from improving their code.
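The readability cost of hand-optimization is easy to see even on a trivial kernel. This sketch (written in Python purely for illustration; real work of this kind happens in C/C++ or assembly) shows the same dot product in its readable form and after a manual 4-way unroll, one of the classic code-rewrite optimizations:

```python
# Illustrative only: the same dot product written for readability versus
# hand-unrolled for a specific target. Both are correct; the optimized
# form is harder to read, maintain, and port.

def dot_readable(a, b):
    return sum(x * y for x, y in zip(a, b))

def dot_unrolled(a, b):
    """Manually 4-way unrolled, with a scalar tail loop."""
    n = len(a)
    s0 = s1 = s2 = s3 = 0.0
    i = 0
    while i + 4 <= n:
        s0 += a[i] * b[i]
        s1 += a[i + 1] * b[i + 1]
        s2 += a[i + 2] * b[i + 2]
        s3 += a[i + 3] * b[i + 3]
        i += 4
    tail = sum(a[j] * b[j] for j in range(i, n))
    return s0 + s1 + s2 + s3 + tail

a = [float(i) for i in range(10)]
b = [float(i % 3) for i in range(10)]
print(dot_readable(a, b) == dot_unrolled(a, b))  # True
```

Multiply this by hundreds of kernels and dozens of tuning parameters per kernel, and the maintenance problem described above becomes clear.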
Our R&D team develops domain-specific algorithm acceleration toolboxes for compute-intensive financial applications. The toolbox is composed of two main modules:
– SCALGO: a full suite of high-performance tools for software code acceleration on CPU/GPU, consisting of a domain-specific compiler, a microservice-based platform, in-memory computing, and a low-latency parallel runtime with its development SDK studio.
– SCALIFAI: a robust AI/ML digital twin factory for replacing compute-intensive financial applications with fast and trustworthy AI/ML models, enabling massive computation acceleration and huge infrastructure cost reduction.
Smooth code optimization process
Code optimization takes minutes instead of months, so you can rerun it as often as you want
Accelerate your compute-intensive applications
From 2x (code-based acceleration) to 1000x (ML-based acceleration)
Reduce hardware footprint
Reduce the number of servers and the associated energy consumption
Several applications in the capital market industry (front to back) are facing performance challenges while racing toward the lowest latency and real-time processing.
As a result, some firms expect to improve performance simply by upgrading hardware or by adopting a hybrid hardware/software architecture. But software does not scale seamlessly with hardware.
Simultaneous access to shared resources causes bottlenecks; they are unavoidable and exist in all software systems. They degrade software quality, response times, and the stability of performance, so they must be identified and addressed. Bottlenecks result from a combination of factors at different levels of the computer system, from low-level multi-threading models and hardware structures (multi-core CPUs) up to the software architecture and the parallel programming model.
Some bottlenecks can be mitigated by increasing hardware capacity (more cores, more memory, more network), but this is often expensive and does not solve the underlying problems: it merely lets you live with them by over-sizing the infrastructure. And how do you prevent the removal of one bottleneck from creating others, in an endless race?
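A minimal sketch of the shared-resource bottleneck described above (our own toy example): several threads funnelling every increment through one lock, versus each thread accumulating privately and merging once at the end. Both produce the same count; only the contended version serializes on the shared resource.

```python
# Toy illustration of a shared-resource bottleneck.
import threading

N_THREADS, N_INCR = 4, 50_000

def contended():
    total = 0
    lock = threading.Lock()
    def worker():
        nonlocal total
        for _ in range(N_INCR):
            with lock:            # every increment serializes here
                total += 1
    threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
    for t in threads: t.start()
    for t in threads: t.join()
    return total

def sharded():
    results = [0] * N_THREADS
    def worker(i):
        local = 0                 # thread-private: no shared state in the loop
        for _ in range(N_INCR):
            local += 1
        results[i] = local        # merge once, at the end
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_THREADS)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(results)

print(contended() == sharded() == N_THREADS * N_INCR)  # True
```

Adding more threads (more hardware) to the contended version does not help, because the lock, not the CPU, is the limiting resource; restructuring the software does.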
Our R&D team is specialized in low-latency architectures and high-performance computing.
We developed a full C++ based SDK for building fast, parallel streaming applications using Reactive software principles and the Actor model. The core parallel runtime is portable and optimized for several multicore CPU architectures. The SDK comes with a rich set of DevOps tools such as a debugger, monitoring, deep telemetry, and a profiler.
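The Actor model the SDK builds on can be sketched in a few lines (this is an illustrative toy in Python, not the SDK's API): each actor owns its state, processes messages one at a time from a private mailbox, and communicates only by message passing, so no locks are needed around the state.

```python
import queue
import threading

class Actor:
    """Toy actor: private state, sequential mailbox processing."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._total = 0                          # private, never shared
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply_to = self._mailbox.get()  # one message at a time
            if msg == "get":
                reply_to.put(self._total)        # hand state out by message
            elif msg == "stop":
                break
            else:
                self._total += msg               # mutate state lock-free

    def send(self, msg, reply_to=None):
        self._mailbox.put((msg, reply_to))

adder = Actor()
for i in range(1, 101):
    adder.send(i)
reply = queue.Queue()
adder.send("get", reply_to=reply)     # mailbox is FIFO: runs after the adds
print(reply.get())                    # → 5050
adder.send("stop")
```

Because the mailbox serializes access, the actor needs no locking, which is one reason the model suits low-latency parallel runtimes.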
Our solution is targeted to mission-critical and latency-sensitive applications.
We use this technology internally to give our products a competitive edge, and we also sell it on a case-by-case basis as a core technology to major clients in the financial industry.
Increase throughput, reduce latency, deterministic jitter
Full utilization of multicore CPUs, lowest possible end-to-end latency, with 99.9th-percentile stability
In-memory and in-cache computing
Computation is performed on the data closest to the CPU, eliminating performance bottlenecks
Portable across multiple environments: operating systems, languages, and multicore CPU hardware
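The in-cache computing idea can be illustrated with a memory-layout toy (our own sketch, unrelated to the SDK): the same row-major matrix traversed row-by-row walks memory contiguously and uses every cache line fully, while a column-by-column walk strides through memory and wastes most of each line. In compiled code the contiguous walk is typically several times faster; the results are identical either way.

```python
# Illustrative sketch of cache-friendly vs cache-hostile traversal.
import array

N = 1000
m = array.array("d", (float(i) for i in range(N * N)))  # row-major layout

def sum_rows():
    # Sequential walk: each 64-byte cache line is fully consumed.
    return sum(m[r * N + c] for r in range(N) for c in range(N))

def sum_cols():
    # Strided walk: jumps N * 8 bytes per access, thrashing the cache.
    return sum(m[r * N + c] for c in range(N) for r in range(N))

print(sum_rows() == sum_cols())  # True: same result, different access pattern
```

Keeping computation on data that is already in cache, rather than fetching it across the memory hierarchy, is what the "closest data to the CPU" claim above refers to.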
The resurgence of attacks on personal data highlights the vulnerability of Internet-connected infrastructures. The insurance and finance sector is particularly sensitive: how can the confidentiality of data be guaranteed from collection through processing, including processing by remote third parties, while respecting regulatory and commercial constraints?
Homomorphic encryption is a privacy-enhancing technology (PET): a form of encryption that allows calculations to be performed on encrypted data without decrypting it first. The result of the computation is itself encrypted; once decrypted, it is identical to the result that would have been obtained by performing the same operations on the unencrypted data.
Homomorphic encryption can be used for privacy-preserving outsourced storage and computation: data can be encrypted and outsourced to commercial cloud environments for processing while remaining encrypted. In the finance and insurance business, homomorphic encryption enables new services by removing the privacy barriers that inhibit data sharing.
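The homomorphic property itself can be demonstrated with textbook (unpadded) RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The parameters below are tiny and deliberately insecure, chosen only to make the arithmetic visible; real FHE schemes support far richer computations.

```python
# Toy demonstration of a homomorphic property (textbook RSA, insecure sizes).

p, q = 61, 53
n = p * q                       # modulus: 3233
e = 17                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n       # computed entirely on encrypted values
print(dec(c))                   # → 42: same as 7 * 6 on the plaintexts
```

The party doing the multiplication never sees 7, 6, or 42, only ciphertexts, which is exactly the outsourcing scenario described above; fully homomorphic schemes extend this to both addition and multiplication, and hence to arbitrary computation.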
Although mathematically ready, the technology is not yet ready for mainstream use: this extreme security comes at a high cost in performance and infrastructure, with overheads that can exceed a factor of 1000.
Our R&D team is collaborating with CEA List to combine our algorithm acceleration technologies and their FHE (fully homomorphic encryption) compiler and PET expertise to make this technology affordable, thus enabling new innovative business models for the financial and insurance industry.
Our combined solution facilitates the building of ultra-fast privacy-preserving algorithms (from simple to complex machine learning algorithms) and the deployment of application code, while making better use of the available computing resources. The benefits are reduced energy consumption and application response time, and therefore a lower cost of running in the cloud.