Amazon Web Services Inc. today expanded its lineup of artificial intelligence capabilities by introducing an enhanced version of SageMaker, its neural network development platform.
Originally introduced in 2017, SageMaker includes more than a dozen AI development tools. Software teams use the platform to create neural networks, train them, monitor their performance after they’re deployed and perform related tasks. Alongside today’s updates, AWS disclosed that the platform is used by tens of thousands of customers.
“Many customers are using ML at a scale that was unheard of just a few years ago,” said Bratin Saha, the vice president of artificial intelligence and machine learning at AWS. “The new Amazon SageMaker capabilities announced today make it even easier for teams to expedite the end-to-end development and deployment of ML models.”
Amazon SageMaker Studio Notebooks
SageMaker includes a tool called Amazon SageMaker Studio Notebooks that developers can use to create new neural networks. It’s a managed version of Jupyter, a popular open-source notebook environment widely used for AI development. With Jupyter, developers can prepare a dataset for analysis, create a neural network to process that dataset and then view the processing results in the same interface.
SageMaker Studio Notebooks is receiving a new feature that can help developers spot errors in the data they process as part of AI projects. According to AWS, the feature identifies data quality issues and recommends ways to remediate them. When a developer selects one of the suggested remediation methods, the tool automatically generates the software code necessary to implement it.
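AWS hasn’t published the internals of the feature, but the kind of issue it targets, and the code it might generate to remediate one, can be illustrated with a short, hypothetical check in plain Python. The column name and the fill-with-mean rule below are illustrative assumptions, not part of the SageMaker API:

```python
# Toy example of the kind of data-quality check and remediation such a
# feature might automate. The dataset and rules here are hypothetical.

def check_missing(rows, column):
    """Return indices of rows whose value in `column` is missing."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def remediate_with_mean(rows, column):
    """Fill missing numeric values in `column` with the column mean."""
    present = [row[column] for row in rows if row.get(column) is not None]
    mean = sum(present) / len(present)
    for row in rows:
        if row.get(column) is None:
            row[column] = mean
    return rows

data = [{"age": 34}, {"age": None}, {"age": 26}]
print(check_missing(data, "age"))   # -> [1]
remediate_with_mean(data, "age")
print(data[1]["age"])               # -> 30.0, the mean of the present values
```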
Deploying a neural network created with Jupyter is often a time-consuming task. Developers must package the neural network into a software container along with any external components that the container may require to run. From there, they must provision cloud infrastructure to host the software.
SageMaker Studio Notebooks can now help with that task as well, according to AWS. A newly added automation feature can package neural networks into software containers without requiring manual work on developers’ part. Moreover, the feature provisions infrastructure for running neural networks and deprovisions hardware resources when they’re no longer needed.
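AWS hasn’t detailed how the automation works internally, but the provision-then-deprovision pattern it describes can be sketched as a context manager, which guarantees that hosting resources are released even if inference fails partway through. Everything below is a conceptual illustration; none of the names correspond to actual SageMaker APIs:

```python
# Conceptual sketch of the provision/deprovision lifecycle described
# above. The endpoint dictionary stands in for real cloud resources.
from contextlib import contextmanager

@contextmanager
def managed_endpoint(model_name):
    """Provision hosting for a model, yield it, then always tear it down."""
    endpoint = {"model": model_name, "status": "InService"}
    print(f"provisioned endpoint for {model_name}")
    try:
        yield endpoint
    finally:
        endpoint["status"] = "Deleted"   # deprovision when no longer needed
        print(f"deprovisioned endpoint for {model_name}")

with managed_endpoint("demo-model") as ep:
    print(ep["status"])   # -> InService while the block runs
print(ep["status"])       # -> Deleted once the block exits
```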
In the enterprise, AI applications are often developed not by one but by multiple software teams. SageMaker Studio Notebooks is receiving a collaboration feature that will make it easier for teams to share AI model code and other software components with one another. According to AWS, the feature makes it possible to organize the components of an AI project in a shared workspace.
Streamlined AI development
SageMaker Studio Notebooks is one of several AI development tools that AWS provides as part of SageMaker. With today’s updates, AWS is also enhancing several other components of the platform.
Some companies use SageMaker to build neural networks that process geospatial datasets, or datasets that include information about specific locations. A logistics company, for example, can build an AI that analyzes road traffic in a given city and finds the fastest delivery routes. AWS is adding new features to SageMaker that will ease the creation of AI models capable of analyzing geospatial data.
SageMaker now enables users to incorporate geospatial data from external sources into an AI project with a few clicks. According to AWS, developers can retrieve information from its Amazon Location Service map platform, open-source datasets and proprietary sources such as satellite constellations.
Because of its complexity, geospatial data often can’t be analyzed in its original form. AWS has equipped SageMaker with features that can automatically turn geospatial data into a form that lends itself better to processing. In conjunction, AWS is adding a collection of pre-trained AI models that can apply geospatial data to use cases such as urban planning and crop yield monitoring.
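One common transformation of this kind is turning raw coordinate pairs into distances a model can actually learn from. The sketch below uses the standard haversine formula; the function and the sample coordinates are illustrative, not part of SageMaker’s geospatial tooling:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Seattle to Portland: roughly 234 km great-circle
print(round(haversine_km(47.6062, -122.3321, 45.5152, -122.6784)))
```

A delivery-routing model like the one described above would consume engineered features such as this distance rather than raw latitude/longitude columns.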
New AI testing features
After developers create a neural network with SageMaker, they can use a new capability called shadow testing to ensure it will work as expected. According to AWS, the capability is more effective than traditional methods of evaluating AI applications’ reliability.
Shadow testing uses a company’s existing AI software to evaluate the reliability of new neural networks. The feature mirrors the user requests sent to the company’s existing AI software, forwards the copies to the new neural network under test and checks whether the network processes them reliably.
According to AWS, SageMaker’s shadow testing feature automatically creates a monitoring dashboard for evaluating AI applications. The dashboard tracks metrics such as latency and error rates. Using the feature, developers can compare the performance of a new neural network with existing software before rolling it out to production.
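The mechanics of a shadow test can be sketched in a few lines of plain Python: production keeps serving every response, while a mirrored copy of each request goes to the candidate model and its latency and errors are recorded. The two placeholder models and the metric names below are illustrative assumptions, not SageMaker’s implementation:

```python
import time

def production_model(request):
    """Stand-in for the AI software already in production."""
    return request["value"] * 2

def candidate_model(request):
    """Stand-in for the new neural network under evaluation."""
    return request["value"] * 2

def shadow_test(requests):
    """Serve from production; mirror a copy to the candidate and track metrics."""
    metrics = {"served": 0, "candidate_errors": 0, "candidate_latency_s": 0.0}
    for request in requests:
        response = production_model(request)      # users only ever see this
        start = time.perf_counter()
        try:
            shadow = candidate_model(dict(request))   # mirrored copy
            if shadow != response:
                metrics["candidate_errors"] += 1      # disagreement counts as an error
        except Exception:
            metrics["candidate_errors"] += 1
        metrics["candidate_latency_s"] += time.perf_counter() - start
        metrics["served"] += 1
    return metrics

print(shadow_test([{"value": v} for v in range(5)]))
```

Because the candidate never answers real users, a buggy or slow new model shows up in the metrics without affecting production traffic.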
Simplified AI governance
AWS debuted the new development features alongside a set of AI governance tools. According to the cloud giant, companies can use the tools to ensure that AI projects powered by SageMaker comply with cybersecurity rules and other internal requirements.
The first tool, Amazon SageMaker Role Manager, allows administrators to more easily regulate user access to a company’s SageMaker environment. Through a centralized console, administrators can configure which SageMaker features each user can access and how.
Another newly added AI governance tool, Amazon SageMaker Model Cards, will help software teams manage the data produced as part of machine learning projects. That data includes items such as AI training datasets and the results of neural network reliability tests. According to AWS, SageMaker Model Cards enable engineers to store such information in a centralized location for easy access.
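AWS hasn’t published the Model Cards schema, but the general idea of a model card, a structured record of a model’s training data, intended use and evaluation results, can be sketched with a small dataclass. The field names and values here are typical of model cards in general, not the SageMaker Model Cards format:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Illustrative model-card record; fields are hypothetical."""
    model_name: str
    training_dataset: str
    intended_use: str
    evaluation_results: dict = field(default_factory=dict)

card = ModelCard(
    model_name="churn-predictor-v3",                       # hypothetical model
    training_dataset="s3://example-bucket/churn/train.csv", # hypothetical path
    intended_use="Flag accounts at risk of churn for follow-up",
    evaluation_results={"accuracy": 0.91, "auc": 0.95},
)
print(asdict(card)["evaluation_results"]["auc"])   # -> 0.95
```

Keeping such records in one structured format is what makes them easy to store and retrieve from a centralized location, as the article describes.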
Rounding out the lineup of AI governance features that AWS introduced for SageMaker today is the Amazon SageMaker Model Dashboard. It provides a console for monitoring the reliability of AI models after they’re deployed in production. The tool can help administrators detect errors such as a sudden decline in the accuracy of an AI application’s processing results and fix them more quickly.