Pre-RAP era of data dynamics
The internet age has resulted in a data explosion. Most of this data is generated by human activity, and a great amount of processing power is spent processing it. Society at large is a net generator of data, while the large compute and storage farms behind social media, internet search engines, messengers and record systems are net consumers of it. Machine learning, and especially deep learning, has yet to return good to society equivalent to the data it consumes. Thousands of hours of human labour go into labelling self-driving car data for every hour of real-time driving, and as better safety and reliability are attempted, the data requirement grows exponentially. In the quest for excellence in automation, advertising revenue and machine intelligence, privacy is sacrificed. The yield society gets back from the use of its data is far below what society should expect.
Figure 1 Machine learning or AI 1.0 (Weak AI) as net data consumer
The major applications of AI 1.0, or weak AI, are shown in Figure 1. Current AI systems are net consumers of data, returning little of it to the end user. Using this model, analytics companies provide good business insights to business owners, healthcare companies provide better insights and diagnostics from end-user data, and automotive companies attempt to make driving safer. There has been great advancement in user experience. Historically, however, the greatest benefits to society and end users came when society consumed more insights, data and knowledge than it gave up, and the current machine learning model of training and inference cannot provide this. Fully autonomous cars and robots remain elusive, and most video analytics companies still struggle with GPUs to make temporal sense of video.
Abilene paradox and labelled data
The Abilene paradox is defined in McAvoy and Butler (2006) as a situation in which "a group of people collectively decide on a course of action that is counter to the preferences of many or all of the individuals in the group. It involves a common breakdown of group communication in which each member mistakenly believes that their own preferences are counter to the group's and, therefore, does not raise objections. A common phrase relating to the Abilene Paradox is a desire not to 'rock the boat'. This differs from groupthink in that the Abilene paradox is characterized by an inability to manage agreement".
To bring the Abilene paradox into the context of the pre-RAP AI era: there are huge internet companies, e-commerce companies, data centres and information aggregators like Google. In the name of free services, these companies have become structured data repositories, and they own roughly 80% of the world's labelled data. Since they own the data, they need to benefit from it, and so do their customers. Thus began the building of ML models to exploit petabytes of structured data, and these powerful internet companies created an ecosystem that followed their exploits. Silicon companies danced to their tune: GPUs became the darlings of machine learning and deep learning. This 200 W silicon, designed for parallel data processing, became the superman of data processing; gaming silicon was repurposed as a deep learning accelerator, opening applications in medicine, entertainment and data centres. Machine learning training workloads moved onto GPUs, and jobs were created for image, text and speech labelling.
Autonomous AI system applications: the Kryptonite for labelled-data-based AI systems
The party stopped abruptly when deep learning was applied to autonomous systems such as self-driving cars. Two of the top AI system and silicon vendors for automotive have failed to ensure the safety of semi-autonomous cars, and one GPU vendor withdrew from self-driving car testing. At last the Abilene paradox is falling apart: autonomous vehicles require a new kind of compute. The deep technical reason GPUs fail in autonomous systems is that an architecture built for gaming, which can tolerate hundreds of milliseconds of latency, has been pivoted into a real-time processor for a safety-critical application like autonomous driving, where every millisecond matters. By its fundamental character, GPU compute is meant to process labelled data, but the likely solution for autonomous systems is one in which the AI system itself controls the data distribution. A typical autonomous system must learn by itself, and enhance that learning within limits, to be useful in most practical applications; it must combine perception with decision making. In the pre-RAP era, the perception part is done entirely by supervised learning, and there is hardly any compute architecture that is good for decision making. GPUs handle conditional compute poorly and hence cannot be used for decision-making applications; most decision making is hardcoded into static software. Most autonomous systems require dynamic computation graphs, where the ML model can change as the computation executes. Recently, dynamic computation graphs have been applied to RL-based systems (Philipp Moritz, Dec 2017); Ray-based distributed compute relies heavily on them. Ray combines GPUs and CPUs, with considerable pain, to achieve its sub-millisecond latencies, while consuming thousands of watts of power in the data centre.
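As a concrete illustration of what a dynamic computation graph means here, consider a minimal Python sketch. This is not RAP or Ray code; the perception stub and action rule are invented for the example. The point is that the branch taken through the computation is decided by the data at run time, which is exactly the conditional compute pattern GPUs handle poorly:

```python
import random

def perceive(obs):
    # Stub perception: score an observation (a neural net in a real system).
    return sum(obs) / len(obs)

def act(score):
    # The branch chosen depends on the data, so the executed
    # computation graph differs from step to step.
    if score > 0.5:
        return "brake"
    return "cruise"

random.seed(0)
trace = []
for step in range(5):
    obs = [random.random() for _ in range(4)]   # simulated sensor frame
    trace.append(act(perceive(obs)))

print(trace)  # the sequence of branches actually executed
```

A static graph would have to evaluate both branches every step; a dynamic one executes only the path the data selects.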
Figure 2 Autonomous AI system application, the Kryptonite for labelled data system
Trouble in the cloud and data centres
Big internet companies, e-commerce companies and data centres take great pride in owning petabytes of labelled data. These data enable classification, detection and, in some cases, clustering, and decision-making software consumes them to provide decisions for various applications. We call this "AI world 1.0".
The drawbacks of AI world 1.0 are:
- Huge labelled data requirement
- Huge effort spent labelling data
- Very low data efficiency, e.g., every 1 hour of driving video requires roughly 800 hours of labelling
- Lower quality of decisions, or complete absence of automatic decisions
- Highly optimized for perception; decisions are hardcoded into post-processing software
- As a net consumer of data, AI models are inward looking, lacking decision making and the power to explore
- Sparse and delayed actionable information or decisions
- Privacy, data theft, identity theft and safety issues
These drawbacks of data being stored in one place and actionable information being centrally sourced do not serve the interests of customers and society. There should be a more equitable exchange of information between the various components of AI systems.
This brewing trouble in the cloud and data centres calls for modern technology deployment, with a different kind of compute in the cloud. The exponential increase in the labelled data requirement also leads to suboptimal use of data centre resources and inferior decision making. The rising power consumption of data centres is not environmentally friendly either, and contributes to global warming.
The whole reason for this state of affairs, at both the edge and in data centres, is that both depend on labelled-data systems: inference on the edge, training and inference in the cloud. To enable better decision making and autonomous AI systems, we need to move to a different kind of AI system. So let us all seek AI world 2.0!
RAP-enabled AI world 2.0 and the great data dynamics inversion of the century
Unlike GPUs, RAP is based on agent-based compute. The very basis of the compute architecture is to enable AI systems that are net producers of actionable information, decisions, policies and decision enablers, to be exchanged across the components of digital systems. By enabling digital system architectures that are equitable in their processing of actionable data, decisions and decision enablers, RAP enables AI world 2.0, which is smart, safe and equitable. AI world 2.0 benefits society and the masses like no other digital system in our past. In AI world 2.0, every entity that produces data is turned into a standalone autonomous AI system, and these standalone autonomous AI systems exchange actionable data, decisions, policies and decision enablers with one another.
Figure 3 AI world 2.0 with the great data dynamics inversion of the century, enabled by RAP compute
As we see from Figure 3, the data dynamics are inverted compared to Figure 1. In AI world 2.0, information exchange is more equitable, and the primary role of data centre and cloud processing shifts from churning labelled data into actionable information to aggregating the actionable information and policies learnt by autonomous systems, and enabling the infrastructure to exchange those policies. The biggest workload of cloud compute will be to trace, aggregate, debug and enable the policies of autonomous systems. AI systems thus become net producers of actionable data, policies and decisions. Little or no labelled data is required in each AI system, and even when labelled data is used, its use is localized to the AI system where it is generated. AI systems are deployed at the data sources; this is possible because the RAP architecture enables self-learning AI systems from sub-1 W all the way to 200 W.
In summary, the advantages of AI world 2.0 enabled by RAP are:
- Decision making enabled like never before, through the exchange of actionable data, decisions and policies learnt in real time
- Reduced labelled data requirement; each AI system is a net producer of data
- Society at large stands to benefit as a net consumer of intelligent actionable information, decisions, decision enablers and policies from AI systems
- Transport will be much safer and more predictable, thanks to decision making being well supported by AI systems
- Industries and institutions will benefit from decentralized actionable data and will be able to proliferate their decisions sooner
- Raw data is processed into actionable data, and information is extracted to make decisions and policies at the source
- AI 2.0 will be the most outward-looking technology enabler, moving applications beyond social networks and aggregators to intelligent machines that explore unknowns and generate huge amounts of actionable data for the benefit of human society
- The combination of blockchain and AI world 2.0 will be the most beneficial form of AI the world has ever seen
How does RAP enable AI world 2.0?
RAP compute is based on agents. Agents are groups of tensors mapped to a domain. When a domain is defined or semi-defined, agents learn by interacting with it. The domain provides noisy labels in the form of rewards and observations; these labels are generated automatically and consumed to generate actions. The domain is the source of data, and the AI agent controls the data distribution. The outcome of such an autonomous or semi-autonomous AI system is decisions, and the policies used to generate them. AlphaICs provides two variants of the RAP architecture to enable AI world 2.0: RAP-E, used for edge compute to generate policies, decisions and actionable data at the source where raw sensory data is generated, and RAP-C, used in data centres to aggregate policies, decisions and actionable data. RAP-C enables tracing, debugging and enabling autonomous systems; RAP-E implements one or more autonomous systems.
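The agent-domain loop described above can be sketched in plain Python. This is a toy illustration, not RAP code: the two-state domain, its reward structure and the bandit-style value update are invented for the example. What it shows is the mechanism the paragraph describes: the domain automatically supplies noisy rewards and observations, and the agent's own action choices shape the data distribution it learns from:

```python
import random

random.seed(42)

def domain_step(state, action):
    # Toy "domain": two states, two actions; action 1 in state 0 pays off.
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    reward += random.gauss(0, 0.1)        # noisy automatic "label"
    next_state = random.randint(0, 1)     # observation from the domain
    return next_state, reward

Q = [[0.0, 0.0], [0.0, 0.0]]              # agent's value estimates
alpha, epsilon = 0.1, 0.2
state = 0
for _ in range(2000):
    if random.random() < epsilon:
        action = random.randint(0, 1)     # explore: the agent controls the data distribution
    else:
        action = max((0, 1), key=lambda a: Q[state][a])
    state_next, reward = domain_step(state, action)
    Q[state][action] += alpha * (reward - Q[state][action])
    state = state_next

# The learnt policy: the greedy action per state.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in (0, 1)]
print(policy)
```

No hand-labelled data appears anywhere in the loop; the reward signal plays the role of the label, which is the contrast with the supervised pipelines of AI world 1.0.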
Autonomous driving enabled by RAP
Autonomous driving is best enabled when infrastructure AI, powered by RAP-C, interacts with autonomous driving AI, powered by RAP-E, through the exchange of driving policies, decisions and actionable information. Autonomous driving requires the infrastructure and the car to work in harmony to achieve safety. Unlike autonomous driving in AI world 1.0, where neural nets are trained offline and weights are updated from the cloud, in AI world 2.0 newer policies are exchanged between edge AI systems and data centre AI systems without the need for massive data-labelling effort. Each AI system is a net producer of data. Each agent created on RAP-E generates actionable data and information that enable safety in the vehicle where it is deployed. There can be several combinations of RAP-E deployment: many agents can execute the current policy while others learn newer policies to improve it. A newly learnt policy can be a function of self-produced data and infrastructure policies. Each agent interacts with the environment for certain safety aspects, and multiple agents may generate data for a given safety aspect to stabilize learning. The safety policy learnt is exchanged with the infrastructure AI.
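One simple way to picture this policy exchange is federated-style averaging: each edge agent contributes locally learnt policy weights, and the infrastructure merges them and broadcasts the result back. The sketch below is purely illustrative; the article does not describe RAP's actual exchange protocol, and the weight vectors and the averaging rule are assumptions made for the example:

```python
def aggregate(policies):
    # Element-wise average of the edge agents' policy weight vectors;
    # this plays the infrastructure (RAP-C) aggregation role in the sketch.
    n = len(policies)
    return [sum(ws) / n for ws in zip(*policies)]

# Three hypothetical edge agents, each with locally learnt weights
# for the same safety policy (values invented for illustration).
edge_policies = [
    [0.9, 0.1, 0.4],
    [0.7, 0.3, 0.6],
    [0.8, 0.2, 0.5],
]

merged = aggregate(edge_policies)
print(merged)  # the merged policy the infrastructure would broadcast back
```

The key property, as in the text, is that only policies travel between edge and infrastructure; the raw sensory data and any labels stay at the source.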
Figure 4 Enabling AI world 2.0 by deploying RAP-E at edge and RAP-C in infrastructure
Conversation systems and AI voice assistants enabled by RAP
Conversation systems are traditionally measured against system performance and user experience. Traditional measures of conversation system performance include, but are not limited to:
- TTS performance
- ASR performance
- Task ease
- Interaction pace
- User expertise
- System Response
- Expected behaviour
- Future use
These are a diverse set of performance measures. TTS and ASR are all about the ease of fit between the user and the conversation system. User experience, interaction pace and system response require local processing on the voice assistant. TTS and ASR performance can be enhanced greatly by using sequential decision making through agents and sending the processed query to the cloud.
Figure 5 Conversation system in AI world 2.0 enabled by RAP compute
When RAP-E™ is deployed on the voice assistant, it provides intelligent queries that maximize ASR and TTS performance. Since the query is refined at the edge, processing speed and user experience improve. Multiple agents can be deployed to enhance the overall performance of the voice assistant and make it AI world 2.0 compliant. The voice assistant can rapidly adapt to the user, with multiple agents learning about different aspects of the user, and more customized conversation policies can be sent to the data centres to enhance the speed of conversations and the user experience.
But autonomous AI systems are difficult
Absolutely! Deep learning systems are much easier to implement than autonomous AI systems, and we at AlphaICs are aware of this. We have spent resources researching and mastering the process of enabling autonomous AI systems, and we understand the pain of hyperparameter tuning and experimentation needed to set up RL systems. This is where our value-add in enabling autonomous systems comes into play. Along with RAP compute, we provide the key technologies, libraries and toolchains needed to realize the true potential of AI: AI world 2.0 and the data dynamics inversion that society and our systems desperately need. We provide toolchains to tune complex hybrid AI systems, the libraries necessary to design autonomous systems quickly, and the underlying hardware to build end-to-end autonomous systems.
Let us collectively enable AI world 2.0!
Stay tuned for more blogs on blockchain enabled by agent-based compute, and case studies on search engine optimization.
McAvoy, J.; Butler, T. (2006). "Resisting the change to user stories: a trip to Abilene". International Journal of Information Systems and Change Management.
Moritz, P.; Nishihara, R.; Wang, S.; Tumanov, A.; Liaw, R.; Liang, E.; Paul, W.; Jordan, M. I.; Stoica, I. (2017). "Ray: A Distributed Framework for Emerging AI Applications".