Quantum for AI
We believe machine learning will be the first area to be impacted by quantum computing, and we are working hard to make this a reality.
Training AI neural networks is extremely compute- and energy-intensive, and puts us on an exponential cost curve as we chase greater AI accuracy.
Without intervention, estimates suggest that within the next ten years the power consumption of computing devices could reach 60% of the total energy produced globally.
AI has brought, and will continue to bring, immense benefits, but it is fast becoming unsustainable.
Quantum artificial intelligence will fix that.
equal1.lab’s quantum machine learning platform offers a path to a cost-effective solution.
Higher intelligence and a green footprint: a win-win.
Our first demonstrator is called Alice: a rack-sized machine with an equal1-designed, fully integrated quantum processing unit featuring a 424-quantum-dot array and all supporting control and readout electronics on-chip (link to technology page).
Yes, you read that right. We are the first company to have a working, fully integrated QPU – for more details, see here.
We are already demonstrating the promise through experiments and measurements – reach out to hear more.
We have been working hard to demonstrate the promise of quantum artificial intelligence using our 2nd-generation quantum processing unit.
Our first two use cases are cell membrane wall detection and satellite imaging.
Below we detail how we used quantum coherent noise from our machine to significantly improve the training of neural networks.
Cell Membrane Wall Detection
The first example we chose is a small set of 30 consecutive 256×256-pixel monochrome images from a serial section transmission electron microscopy (ssTEM) dataset (ref) of the Drosophila first instar larva ventral nerve cord. Below are two examples of the raw images and the corresponding cell walls, which are the targets for our semantic segmentation task.
The images are representative of real-world data: there is some noise, there are image registration errors, and there is even a small stitching error in one section. None of these would cause any difficulty for an expert human neuroanatomist manually labeling each element in the image stack, but a software application that aims to remove or reduce human involvement must be able to cope with all of these issues.
We injected quantum coherent noise generated from our 424-quantum-dot array structure into a neural network trained to detect membrane walls. Employing this network to make predictions results in improved Dice coefficient scores. The figures below (comparing the default prediction model with the model trained with the unitary noise) verify the efficacy of the proposed technique. The clearest comparison can be seen in the last image in Fig. 12 with purple segmentation, where we applied harder thresholds to create sharper imagery.
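The exact injection mechanism is not specified here, but one common way to regularize training with an external noise source is to add a noise tensor to layer activations during training only, then score the resulting masks with the Dice coefficient. A minimal sketch, assuming a generic callable noise source (Gaussian samples stand in for the hardware-derived noise):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

class NoiseInjection:
    """Adds a noise tensor to activations during training only.

    `noise_source` is a hypothetical callable returning an array of the
    given shape -- e.g. Gaussian samples, or samples recorded from
    quantum hardware.
    """
    def __init__(self, noise_source, scale=0.1):
        self.noise_source = noise_source
        self.scale = scale

    def __call__(self, activations, training=True):
        if not training:
            return activations  # inference path is left untouched
        return activations + self.scale * self.noise_source(activations.shape)

# Usage: Gaussian noise as a stand-in for the hardware-derived samples.
rng = np.random.default_rng(0)
inject = NoiseInjection(lambda shape: rng.standard_normal(shape), scale=0.05)
noisy = inject(np.zeros((4, 256, 256)))
```

The key design point is that the noise perturbs the forward pass only while training, acting as a regularizer; at prediction time the network runs deterministically.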
The second use case (7) concerns satellite imagery used by humanitarian organizations (from www.crowdai.org). During a disaster, it would be extremely useful to map, for example, the impassable sections of road, as well as the most damaged residential areas, the most vulnerable schools and public buildings, population movements, etc. The objective is to adapt the intervention procedures as quickly as possible to the evolving context generated by the crisis. In the first days following the disaster, it is essential to have as fine a mapping as possible of communication networks, housing areas, infrastructure, areas dedicated to agriculture, etc.

The images are available from a variety of sources, including nano-satellites, drones and conventional high-altitude satellites. Today, when new maps are needed they are drawn by hand, often by volunteers who participate in so-called Mapathons. The task is to produce intervention-specific maps with relevant features in a short time frame and from disparate data sources using a machine learning solution.
The training data (see the examples in Fig. 13) consists of individual tiles of satellite imagery in RGB format, and the labels (color segmentation superimposed on the images) are the corresponding annotations of where a building exists. The goal is to train a model that, given a new tile (satellite image), can annotate all buildings. Even though the training dataset has 280,741 tiles (as 300×300-pixel RGB images), we restrict our test case to only 0.5% of them in the spirit of accelerated training times.
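Restricting training to 0.5% of the tiles amounts to a reproducible random subsample. A minimal sketch (the tile identifiers and seed are illustrative):

```python
import random

def subsample(items, fraction, seed=42):
    """Return a reproducible random subset covering `fraction` of items."""
    rng = random.Random(seed)
    k = max(1, round(len(items) * fraction))
    return rng.sample(items, k)

# 0.5% of the 280,741 available tiles
tile_ids = list(range(280_741))
train_ids = subsample(tile_ids, 0.005)
print(len(train_ids))  # → 1404
```

Seeding the sampler keeps the subset fixed across runs, so accuracy comparisons between noise configurations are made on identical training data.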
From the trained model, we perform some predictions to see how these improvements come through visually. The results are illustrated below for the Gaussian noise (top) and for the unitary noise (bottom).
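The "harder thresholds" used to sharpen the segmentation imagery amount to binarizing the model's probability map at a cut-off above the default 0.5. A minimal sketch of that post-processing step (the example values are illustrative):

```python
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Turn a [0, 1] probability map into a hard segmentation mask.

    Raising the threshold keeps only high-confidence pixels, which
    yields sharper, less fuzzy boundaries at the cost of recall.
    """
    return (prob_map >= threshold).astype(np.uint8)

probs = np.array([[0.2, 0.55],
                  [0.7, 0.95]])
soft = binarize(probs, 0.5)  # → [[0, 1], [1, 1]]
hard = binarize(probs, 0.9)  # → [[0, 0], [0, 1]]
```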
We are busy working on more use cases in medical imaging, materials science and other fields – please contact us to discuss your use case.