Last month, Microsoft held their annual Ignite conference, where a range of new features were announced. In this blog post, we’re going to share just some of the new releases and announcements that we at Mantel Group are particularly excited about!

AI and ML announcements

The main theme of the conference was AI. Microsoft has made significant investments in AI and announced some exciting new features that customers can take advantage of today.

Azure Machine Learning received some significant updates, including the ability to use prompt flow in your Large Language Model (LLM) workflows, helping you streamline the application lifecycle of your LLM applications. Azure ML also introduced the Model Catalog, which lets you find, evaluate and fine-tune foundation models from providers such as OpenAI and Hugging Face, so you can pick the right model for your applications. Soon, Azure ML will also provide Models-as-a-Service, enabling you to integrate model API endpoints without having to manage the underlying GPU infrastructure.
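If you’d like a feel for the model catalog from code, here’s a minimal sketch using the azure-ai-ml Python SDK to browse models in the shared registry that backs the catalog. The registry name (“azureml”) and the simple listing loop are our assumptions for illustration; the catalog is also browsable directly in the Azure ML studio UI.

```python
# Minimal sketch: browsing the Azure ML model catalog with the azure-ai-ml SDK.
# Assumes azure-ai-ml and azure-identity are installed, and that the catalog
# models are exposed through the shared "azureml" registry (an assumption).
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# Point the client at the shared registry rather than a specific workspace.
registry_client = MLClient(
    credential=DefaultAzureCredential(),
    registry_name="azureml",
)

# List the latest version of each foundation model available in the registry.
for model in registry_client.models.list():
    print(model.name, model.version)
```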

Microsoft also released Azure AI Studio, a unified AI platform that gives engineers a one-stop shop to explore, build, test, and deploy AI solutions. Engineers will be able to build generative AI applications using out-of-the-box and customisable tooling and models.

Azure AI Vision received some cool updates, such as liveness detection to help prevent face recognition spoofing attacks, as well as updates to the Vision SDK and new image analysis models, OCR models, object detection, and more, all available through API endpoints. The updated OCR models have improved accuracy for both typed and handwritten text.
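As a taste of what the updated image analysis and OCR models look like in code, here’s a hedged sketch using the azure-ai-vision-imageanalysis Python package to caption an image and read the text in it. The endpoint, key and image URL are placeholders, and the exact result fields are our assumptions based on the current SDK, so check the docs for your SDK version.

```python
# Hedged sketch: captioning and OCR with Azure AI Vision image analysis.
# The endpoint, key and image URL below are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures

client = ImageAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Ask for both a natural-language caption and OCR (READ) in one call.
result = client.analyze_from_url(
    image_url="https://example.com/receipt.jpg",
    visual_features=[VisualFeatures.CAPTION, VisualFeatures.READ],
)

if result.caption:
    print("Caption:", result.caption.text)
if result.read:
    for block in result.read.blocks:
        for line in block.lines:
            print("OCR line:", line.text)
```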

The buzz continued around Azure OpenAI Service, as new multimodal capabilities were announced. DALL-E 3, GPT-3.5, GPT-4 Turbo and GPT-4 Turbo with Vision were all announced to help engineers build generative AI experiences with image, text, and video. Azure OpenAI Service can also be integrated with Azure AI Vision, allowing GPT-4 Turbo with Vision (GPT-4V) models to accept images or video alongside text and generate text output. These multimodal AI capabilities are currently in preview, so we’re excited to test them out and see them become generally available soon!
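To show how little code is involved, here’s a minimal sketch of sending an image alongside a text prompt to a GPT-4 Turbo with Vision deployment through Azure OpenAI Service, using the openai Python package. The endpoint, API version and deployment name are assumptions; substitute the values from your own Azure OpenAI resource.

```python
# Minimal sketch: multimodal chat with a GPT-4 Turbo with Vision deployment.
# The endpoint, key, API version and deployment name are placeholders/assumptions.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2023-12-01-preview",
)

response = client.chat.completions.create(
    model="gpt-4-vision",  # your GPT-4 Turbo with Vision deployment name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```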

We’re also excited to see closer integration between Azure AI Video Indexer, Azure AI Search and Azure OpenAI Service. Video-to-text summarisation and efficient video content search capabilities now enable engineers to extract content from videos, generate concise text summaries, and transform that content into a searchable format using LLMs and Video Indexer’s insights.

Finally in the AI space, Azure Cognitive Search became Azure AI Search. Vector search became generally available in AI Search, allowing you to turn documents and data into numerical vector embeddings for faster and more efficient retrieval. Semantic ranker (previously known as semantic search) also became generally available, helping engineers ensure that the most relevant search results are returned first.
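Here’s a hedged sketch of what combining vector search with the semantic ranker looks like using the azure-search-documents Python SDK. The index name, vector field name, semantic configuration name and the “title” field in the results are all assumptions about how your index is set up; the query vector would normally come from the same embedding model you used at indexing time.

```python
# Hedged sketch: hybrid vector + semantic ranking query with Azure AI Search.
# Index name, vector field, semantic configuration and "title" field are assumptions.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="docs-index",
    credential=AzureKeyCredential("<your-query-key>"),
)

# In practice this vector comes from the embedding model used at indexing time.
query_vector = [0.0] * 1536

results = search_client.search(
    search_text="What did Microsoft announce at Ignite?",
    vector_queries=[
        VectorizedQuery(vector=query_vector, k_nearest_neighbors=5, fields="contentVector")
    ],
    query_type="semantic",
    semantic_configuration_name="default",
    top=5,
)

for doc in results:
    print(doc["title"])
```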

Platform Engineering announcements

AI also had an impact on the Microsoft platform engineering space, as Kaito (the Kubernetes AI toolchain operator) was released. With Kaito, engineers can now run specialised ML workloads, such as LLMs, on Azure Kubernetes Service (AKS) with less manual configuration and greater cost efficiency. Kaito does this by automating LLM deployment on AKS across CPU and GPU resources, selecting optimally sized infrastructure for your model. Additionally, Azure Kubernetes Fleet Manager is now generally available, enabling multi-cluster, at-scale updates by allowing admins to orchestrate updates across multiple clusters using update runs, stages and groups.
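To make the Kaito workflow concrete, below is a hedged sketch that uses the official kubernetes Python client to create a Kaito Workspace custom resource on an AKS cluster. The instance type, preset model name and the Workspace fields themselves mirror Kaito’s published examples and may differ between Kaito versions, so treat this as an illustration rather than a definitive manifest.

```python
# Hedged sketch: creating a Kaito Workspace on AKS with the kubernetes Python client.
# The Workspace fields below follow Kaito's published falcon-7b example and are
# assumptions; check the Kaito docs for the schema matching your installed version.
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl is already configured for your AKS cluster
api = client.CustomObjectsApi()

workspace = {
    "apiVersion": "kaito.sh/v1alpha1",
    "kind": "Workspace",
    "metadata": {"name": "workspace-falcon-7b"},
    # Kaito provisions right-sized GPU nodes based on the requested instance type.
    "resource": {
        "instanceType": "Standard_NC12s_v3",
        "labelSelector": {"matchLabels": {"apps": "falcon-7b"}},
    },
    # The preset tells Kaito which model and inference runtime to deploy.
    "inference": {"preset": {"name": "falcon-7b"}},
}

api.create_namespaced_custom_object(
    group="kaito.sh",
    version="v1alpha1",
    namespace="default",
    plural="workspaces",
    body=workspace,
)
```

Once the Workspace is created, Kaito takes care of provisioning the GPU node pool and standing up the inference endpoint, which is where the reduced manual configuration and cost efficiency come from.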

For engineers who don’t want to deal with the complexities of Kubernetes, Azure Container Apps announced support for dedicated GPU workload profiles. This allows engineers to run ML models using Container Apps as a target platform. This is limited to West US 3 and North Europe regions for now, but we’re keeping an eye on when this will be available in Australia’s data centres.

Outside of AI, there were plenty of announcements in the platform engineering space to get excited about. After a long time in preview, Azure Chaos Studio became generally available. Chaos Studio gives platform engineers the ability to deliberately disrupt their applications, uncover reliability issues, and understand how to prevent those issues before they impact users. This helps engineers increase the resilience of their applications against faults and failures by conducting experiments using agent-based and service-based faults.

Microsoft Dev Box also introduced dev box limits, which allow teams to cap the number of dev boxes each developer can create within a project, helping to manage costs and ensure efficient use of resources. You can also connect dev boxes to new Microsoft-hosted networks if you want to avoid connecting them to your own virtual network.

Finally, new App Service capabilities for Linux and Windows were announced. You can now connect multiple App Service plans to a single subnet in a virtual network. WebJobs are now available on App Service for Linux (public preview), along with extensibility support on Linux, and gRPC is now generally available for web apps running on App Service for Linux.

Summary

At Ignite, we learnt that Microsoft has made significant investments in both AI and platform engineering. As an accredited Microsoft Solutions Partner, Mantel Group is committed to providing solutions and services that genuinely support you and enable your business to thrive. If you need help uplifting your platform engineering capabilities, or want to learn more about how you can benefit from Azure AI capabilities in your business, get in touch with us below.