Specialized requirements, interoperability, and cloud vs. edge computing among key factors in AI for 2019
2019 is poised to be a big year for data science in engineering teams. Emerging technologies such as artificial intelligence (AI) will continue to proliferate across various industries. Hype around these technologies and their applications makes it difficult for engineers to set realistic expectations.
To help engineers navigate the coming year, outlined below are some trends and predictions that stand to make a major impact in 2019:
Deep learning adoption continues to expand
Data scientists won’t be the only ones driving experimentation and adoption of deep learning. Technical curiosity, the potential benefits and promise of AI, and automation tools will empower more engineers and scientists to deploy artificial intelligence. Advances in deep learning workflow tools simplify and automate data synthesis, labeling, tuning, and deployment, making AI accessible beyond just data scientists.
These tools also broaden the range of applications—from imaging and computer vision to time-series data such as audio, signal, and Internet of Things (IoT) data. Such applications range from unmanned aerial vehicles using AI for object detection in aerial imagery to improved pathology diagnosis for early disease detection in cancer screenings.
Aerial imagery helps farmers with smart spraying and weed detection. Image courtesy of Gamaya.
New requirements for industrial applications
Smart cities, predictive maintenance, Industry 4.0, and other IoT and AI-led applications demand that a set of criteria be met as they move from visionary concepts to reality. Examples include safety-critical applications needing increased reliability and verifiability; low-power, mass-produced, and mobile systems that require small form factors; and advanced mechatronic design approaches that integrate mechanical, electrical, and other components.
These specialized applications are often developed and managed by decentralized (not under IT) development and service teams, presenting another challenge. Application examples range from agricultural equipment using AI for smart spraying and weed detection to overheating detection on aircraft engines.
Interoperability key to assembling a complete AI solution
No single framework exists that provides a “best-in-class” solution for AI. Currently, each deep learning framework tends to focus on a few applications and production platforms, while effective solutions require assembling pieces from several different workflows, creating friction and reducing productivity.
Organizations like ONNX (http://onnx.ai) are addressing these interoperability challenges, which will enable developers to choose the best tool, more easily share their models, and deploy their solutions to a wider set of production platforms.
Public clouds poised to gain popularity as AI host platform
Public clouds will see greater use as the host platform for artificial intelligence and will evolve to reduce both complexity and reliance on IT departments. GPUs, flexible storage options, and production-grade container technology are a few of the reasons that AI applications are increasingly based in the cloud.
For engineers and scientists, cloud-based development eases collaboration and enables on-demand use of computing resources rather than buying expensive hardware with limited lifespan. Vendors of cloud, hardware, and software products recognize, however, that this technology stack is often difficult to set up and use in their development workflows.
Edge computing will enable new AI applications
Specifically, edge computing will enable AI applications that require local processing. Advances in sensors and low-power computing platforms will make possible high-performance, real-time, and increasingly complex AI solutions at the edge.
Edge computing will be critical to safety in autonomous vehicles, which need to understand their local environment and assess driving options in real time. It may also yield huge cost savings for remote locations with limited or expensive internet connectivity, such as deep-sea oil platforms.
Increased use of deep learning will require greater collaboration
The increased use of deep learning will necessitate more participants and greater collaboration. Data collection, synthesis, and labeling are increasing the scope and complexity of deep learning projects, requiring larger, decentralized teams. Systems and embedded engineers will require the flexibility to deploy inference models to data centers, cloud platforms, and embedded architectures such as FPGAs, ASICs, and microcontrollers.
Optimization, power management, and component reuse will also be required. Engineers developing deep learning models will need tools to experiment with and manage the increasing volumes of training data, as well as the lifecycle of the inference models they hand off to systems engineers.