Artificial intelligence rests on the assumption that the process of human thought can be mechanized. The dream of thinking machines appears in Indian philosophies dating back to 1500 BCE, such as Charvaka, which held direct perception, empiricism, and conditional inference to be the proper sources of knowledge. But serious work on modern AI began only in the mid-twentieth century, when Alan Turing, the English computer scientist, mathematician, logician, philosopher, and theoretical biologist, proposed that if a machine could carry on a conversation indistinguishable from a conversation with a human being, it was reasonable to say the machine was “thinking”.
Although many different approaches to building thinking machines were tried over the following six decades, AI researchers failed to deliver on their promise, primarily for lack of computing power and data.
Thanks to the relentless march of Moore’s law, which drove exponential growth in compute performance over the last fifty years along with a steady fall in the cost of computing, and to the advent of the Internet and mobile devices, which unleashed an explosion of data, AI researchers have in less than a decade dusted off their shelved ideas of neural networks, deep learning, and reinforcement learning and made AI interesting again.
Of the three mainstream compute-hardware platforms (Intel CPUs, popular in PCs, laptops, and servers; ARM chips, prolific in mobile devices including smartphones and tablets; and gaming GPUs, mostly from Nvidia, built for very high-performance graphics), the Nvidia gaming chips turned out to be a natural fit for deep-learning algorithms. Nvidia capitalized on the opportunity, and most AI researchers, mathematicians, and computer scientists in search of a fast hardware platform adopted its GPUs, pursuing relentless research into improving their algorithms and perfecting the art of deep learning.
AlphaICs has built a custom hardware platform not just for more efficient, higher-performance supervised deep learning, covering perception tasks such as image and speech classification, but also for decision making using self-learning agents, enabling the strong AI needed for real-time autonomous systems such as dialog systems and ADAS, where milliseconds matter. These supervised self-learning agents deliver reinforcement learning today and will provide the foundation for unsupervised learning as AI evolves.
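The document does not describe AlphaICs’ agent algorithms, so as a generic illustration of what “reinforcement learning with self-learning agents” means, here is a minimal tabular Q-learning sketch on a toy corridor environment. All names and parameters here are illustrative, not AlphaICs’ method.

```python
import random

random.seed(0)

# Toy 5-state corridor: the agent starts at state 0 and earns a
# reward of 1.0 on reaching state 4. Pure illustration of the
# reinforcement-learning idea: learn from trial-and-error reward.
N_STATES = 5
ACTIONS = [-1, +1]                    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic transition; reward 1.0 on reaching the goal state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(500):                  # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPSILON:             # explore
            a = random.choice(ACTIONS)
        else:                                     # exploit current estimate
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: move Q toward reward + discounted best future value
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
```

The key point is that no labeled examples are given: the agent discovers the right behavior purely from delayed reward, which is what distinguishes reinforcement learning from the supervised perception tasks mentioned above.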
By developing a novel agent-based compute paradigm, AlphaICs is advancing AI to the next generation. CPUs are primarily scalar: a single instruction operates on a single datum. GPUs are vector-based: a single instruction operates on a “linear array” of data (a vector). The AlphaICs Real AI Processor (RAP™) is based on agents, groups of interconnected tensors, packing much more powerful yet efficient high-dimension compute. GPUs, on the other hand, lack the architecture to handle the divergence of threads that reinforcement learning requires.
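As a software analogy for the three compute styles just described (scalar, vector, and tensor/agent), the sketch below contrasts them using NumPy. This only illustrates the shapes of data each style operates on; the actual processors execute these patterns natively in hardware.

```python
import numpy as np

a = np.arange(8, dtype=np.float32)
b = np.arange(8, dtype=np.float32)

# Scalar (CPU-style): one instruction operates on one datum at a time.
scalar_sum = np.empty_like(a)
for i in range(len(a)):
    scalar_sum[i] = a[i] + b[i]

# Vector (GPU/SIMD-style): a single instruction operates on a linear array.
vector_sum = a + b

# Tensor (agent-style): a single operation over multi-dimensional data,
# e.g. a batched matrix multiply across a stack of matrices.
rng = np.random.default_rng(0)
x = rng.random((4, 3, 5)).astype(np.float32)   # e.g. data for 4 agents
w = rng.random((4, 5, 2)).astype(np.float32)
tensor_out = x @ w                             # one op, shape (4, 3, 2)

assert np.allclose(scalar_sum, vector_sum)
```

The scalar and vector paths compute the same result; the difference is how much data a single operation touches, which is exactly the axis along which the three hardware generations differ.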
Furthermore, RAP™ introduces a new specialized instruction set, SIMA™ (Single Instruction Multiple Agents), which minimizes overhead to increase energy efficiency. A high degree of program control is achieved through a very large number of special instructions. SIMA™ enables multiple agents to work asynchronously in groups, in different environments, bringing a huge level of parallelism at the agent level and thereby significantly increasing the rate of learning.
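SIMA™ itself is proprietary, but the programming model it suggests (one instruction dispatched to many agents, each acting in its own environment) can be sketched in software. The sketch below uses Python threads as a stand-in for agent-level parallel dispatch; every name in it is hypothetical and it says nothing about the RAP hardware itself.

```python
from concurrent.futures import ThreadPoolExecutor

def agent_step(env_state):
    """The shared 'instruction': every agent advances its own environment.

    Each agent carries its own (observation, total_reward) state, so the
    same instruction stream produces different results per agent.
    """
    observation, total_reward = env_state
    reward = observation % 3              # stand-in for an environment reward
    return (observation + 1, total_reward + reward)

# Eight agents, each starting from a different observation.
envs = [(i, 0) for i in range(8)]

with ThreadPoolExecutor(max_workers=8) as pool:
    for _ in range(5):                    # five parallel "dispatches"
        # One instruction, multiple agents: the same function is mapped
        # across all agent states concurrently.
        envs = list(pool.map(agent_step, envs))
```

The point of the analogy is that parallelism lives at the level of whole agents rather than individual data elements, which is why the text argues it raises the rate of learning: many agents gather experience simultaneously.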
AlphaICs has created a new AI compute solution that aims to deliver 3X to 15X better performance than the alternatives from Nvidia, Graphcore, and Intel, at less than half their price.