
Meet our federal team at Space Tech Expo USA

We’re excited to announce our participation in Space Tech Expo 2024, which will take place from May 14 to 15 in Long Beach, California. As our collaboration with Space agencies strengthens, we’re looking forward to meeting our partners and customers on-site to discuss the critical topics for 2024: cybersecurity, artificial intelligence, and open-source innovation.

AI/ML Solutions in Space 

Space organizations invest heavily in AI, aiming to make interstellar travel successful. Agencies kickstart initiatives with different use cases in mind, such as mission planning, autonomous decision-making, resource optimization, and space debris control, looking for tooling that enables them to run AI at scale.

Unlocking real-time space tracking with AI/ML supercomputing

Just recently we worked with the University of Tasmania and Firmus to unlock real-time space tracking with AI/ML supercomputing. 

The University of Tasmania (UTAS) is among the leading providers of space tracking in the southern hemisphere.  The number of new satellites entering the Earth’s orbit is increasing at a near-exponential rate, exceeding 10% growth per year as of 2022, so the university’s work is critical. However, traditional CPU-based data processing had led to a bottleneck that prevented comprehensive, real-time object monitoring. 

What UTAS needed was a modernised infrastructure that could support its immense data processing requirements without drastically inflating costs. The university’s solution was to migrate its space tracking software to Supercloud – a cost-effective and highly sustainable supercomputing platform from Firmus, built using Canonical OpenStack and Canonical Kubernetes, and capable of supporting the most data-intensive AI/ML workloads. 

With Firmus supercomputing based on Canonical infrastructure, UTAS has successfully solved its data processing bottleneck. It has also positioned itself to be able to take advantage of additional MLOps tooling, such as Charmed Kubeflow, that can run on top of Canonical Kubernetes to further simplify AI/ML workflows.

“The UTAS project will help fill the global shortfall in space tracking coverage, and will improve the safety of orbital infrastructure and secure the future of space missions. We are pleased to play our part by contributing Firmus compute resources to support much needed real-time space tracking”. — Peter Blain, Director of Product & AI, Firmus

Explore the case study here. 

To provide the most complete AI solutions to space agencies, we’ve partnered with leading hardware, silicon and cloud providers such as NVIDIA, Dell, AWS, Google Cloud, HPE, Intel, Azure and more.

Sending the artwork to the International Space Station 

In 2022, we teamed up with Lonestar to send artwork to the ISS. The artwork is part of a global STEAM initiative and a groundbreaking immutable data storage and edge processing demonstration currently running aboard the International Space Station (ISS), led by stealth start-up Lonestar, open-source leader Canonical, and Redwire Corporation, a leader in space infrastructure for the next-generation space economy.

Read the full story here. 

Cybersecurity with Ubuntu Pro

Open source has been used in space technology for years, and it is no surprise that cybersecurity is a key concern for the industry. As part of our commitment to securing open source, last year we announced the general availability of the Ubuntu Pro subscription.

It secures an organisation’s Linux estate from the OS to the application level. Pro is available on-prem, in the cloud and in air-gapped environments, automating security patching, auditing, access management and compliance. Ubuntu Pro delivers FIPS compliance and automation for security standards such as DISA’s Ubuntu STIG, and CIS hardening via the Ubuntu Security Guide (USG).

One of the growing concerns for 2024 is application security. Many open-source packages for applications and toolchains exist in a space with no guarantee or SLA for security patching. With Ubuntu Pro, we secure over 23,000 open source applications.

If the topic sounds interesting to you, schedule a meeting with our Federal Director, Kelley Riggs, for an in-person discussion at Space Tech Expo.

What is MLflow?

MLflow is an open source platform used for managing machine learning workflows. It was launched back in 2018 and has grown in popularity ever since, reaching 10 million users in November 2022. AI enthusiasts and professionals have long struggled with experiment tracking, model management and code reproducibility, so when MLflow was launched, it addressed pressing problems in the market. MLflow is lightweight and able to run on an average-priced machine, but it also integrates with more complex tools, making it suitable for running AI at scale.

A short history

Since MLflow was first released in June 2018, the community behind it has run a recurring survey to better understand user needs and ensure the roadmap addresses real-life challenges. About a year after the launch, MLflow 1.0 was released, introducing features such as improved metric visualisations, metric X coordinates, improved search functionality and HDFS support. Additionally, it offered Python, Java, R, and REST API stability.

MLflow 2.0 landed in November 2022, when the product also celebrated 10 million users. This version incorporates extensive community feedback to simplify data science workflows and deliver innovative, first-class tools for MLOps. Features and improvements include extensions to MLflow Recipes (formerly MLflow Pipelines) such as AutoML, hyperparameter tuning, and classification support, as well as improved integrations with the ML ecosystem, a revamped MLflow Tracking UI, a refresh of core APIs across MLflow’s platform components, and much more.

In September 2023, Canonical released Charmed MLflow, a distribution of the upstream project.

Why use MLflow?

MLflow is often considered the most popular ML platform. It enables users to perform different activities, including:

  • Reproducing results: ML projects usually start with simplistic plans and tend to grow quickly, resulting in an overwhelming number of experiments. Manual or non-automated tracking implies a high chance of missing out on finer details. ML pipelines are fragile, and even a single missing element can throw off the results. The inability to reproduce results and code is one of the top challenges for ML teams, and MLflow addresses it by recording the parameters, metrics and artefacts behind every run.
  • Easy to get started: MLflow can be easily deployed and does not require heavy hardware to run. It is suitable for beginners who are looking for a solution to better visualise and manage their models. For example, this video shows how Charmed MLflow can be installed in less than 5 minutes.
  • Environment agnostic: The flexibility of MLflow across libraries and languages is possible because it can be accessed through a REST API and a Command Line Interface (CLI). Python, R, and Java APIs are also available for convenience (see the REST sketch after this list).
  • Integrations: While MLflow is popular in itself, it does not work in a silo. It integrates seamlessly with leading open source tools and frameworks such as Spark, Kubeflow, PyTorch or TensorFlow.
  • Works anywhere: MLflow runs on any environment, including hybrid or multi-cloud scenarios, and on any Kubernetes.
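
As referenced in the list above, here is a minimal, hedged sketch of talking to an MLflow tracking server’s REST API directly; it assumes a server is reachable at http://localhost:5000 (an assumption for the example) and queries the default experiment.

```python
import requests

TRACKING_SERVER = "http://localhost:5000"  # assumption: a locally running MLflow tracking server

# Search runs in the default experiment via the REST API,
# the same operation the Python, R and Java clients wrap.
response = requests.post(
    f"{TRACKING_SERVER}/api/2.0/mlflow/runs/search",
    json={"experiment_ids": ["0"], "max_results": 5},
    timeout=10,
)
response.raise_for_status()
for run in response.json().get("runs", []):
    print(run["info"]["run_id"], run["info"]["status"])
```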

MLflow components

MLflow is an end-to-end platform for managing the machine learning lifecycle. It has four primary components:

MLflow Tracking

MLflow Tracking enables you to track experiments, with the primary goal of comparing results and the parameters used. It is crucial when it comes to measuring performance, as well as reproducing results. Tracked items include metrics, hyperparameters, features and other artefacts, which can be stored on local systems or on remote servers.
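
As a rough illustration, the sketch below logs parameters, metrics and an artefact through MLflow’s Python tracking API; the experiment name, values and config file are invented for the example, and with no tracking URI configured MLflow simply records everything in a local ./mlruns directory.

```python
from pathlib import Path

import mlflow

mlflow.set_experiment("space-tracking-demo")  # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    # Hyperparameters for this run
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("batch_size", 64)

    # Metrics can be logged per step/epoch so they can be plotted over time
    for epoch, accuracy in enumerate([0.71, 0.78, 0.83]):
        mlflow.log_metric("accuracy", accuracy, step=epoch)

    # Arbitrary files (configs, plots, datasets) are stored as artefacts
    Path("training_config.yaml").write_text("lr: 0.01\nbatch_size: 64\n")
    mlflow.log_artifact("training_config.yaml")
```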

MLflow Models

MLflow Models provide professionals with different formats for packaging their models. This gives flexibility in where models can be used, as well as the format in which they will be consumed. It encourages portability across platforms and simplifies the management of the machine learning models. 
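
For illustration, the hedged sketch below logs a toy scikit-learn model in the MLflow Models format and reloads it through the generic pyfunc flavour; the dataset and model are placeholders and nothing here is specific to Charmed MLflow.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data and model standing in for a real training pipeline
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run() as run:
    # Saves the model in the MLflow Models format (MLmodel file plus flavours)
    mlflow.sklearn.log_model(model, artifact_path="model")

# Any logged model can be reloaded through the generic pyfunc flavour,
# regardless of the framework it was trained with.
loaded = mlflow.pyfunc.load_model(f"runs:/{run.info.run_id}/model")
print(loaded.predict(X[:5]))
```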

MLflow Projects

Machine learning projects are packaged using MLflow Projects, which ensures reusability, reproducibility and portability. A project is a directory that gives structure to the ML initiative. It contains the descriptor file used to define the project structure and all of its dependencies. The more complex a project is, the more dependencies it has, and dependencies bring risks around version compatibility and upgrades.

MLflow Projects are especially useful when running ML at scale, where there are larger teams and multiple models being built at the same time. They enable collaboration between team members who want to work jointly on a project, or to transfer knowledge to each other or to production environments.
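
As a small, hedged example, the sketch below launches MLflow’s public example project through the Python API; the repository URI and the alpha parameter come from that example project, and env_manager="local" reuses the current environment instead of building a new one.

```python
import mlflow

# Run a project directly from a Git repository (a local directory path also works).
# The URI below is MLflow's own example project; the parameters are the ones
# its MLproject file declares for the "main" entry point.
submitted = mlflow.projects.run(
    uri="https://github.com/mlflow/mlflow-example",
    entry_point="main",
    parameters={"alpha": 0.5},
    env_manager="local",  # reuse the current environment rather than creating one
)
print(submitted.run_id, submitted.get_status())
```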

MLflow Model Registry

Model Registry gives you a centralised place where ML models are stored. It simplifies model management throughout the full lifecycle, including how a model transitions between different stages. It includes capabilities such as versioning and annotating, and provides APIs and a UI.
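
A minimal sketch of the registry workflow is shown below; the model name, description and run ID are placeholders, and the stage transition uses the classic staged workflow (newer MLflow releases also offer aliases).

```python
import mlflow
from mlflow import MlflowClient

client = MlflowClient()

# Register a model that was previously logged under a run
# ("<run-id>" is a placeholder for a real run ID).
result = mlflow.register_model(
    model_uri="runs:/<run-id>/model",
    name="demand-forecaster",
)

# Annotate the new version and move it through the lifecycle stages.
client.update_model_version(
    name="demand-forecaster",
    version=result.version,
    description="Baseline model registered from the latest training run.",
)
client.transition_model_version_stage(
    name="demand-forecaster",
    version=result.version,
    stage="Staging",
)
```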

Key concepts of MLflow

MLflow is built around two key concepts: runs and experiments. 

  • In MLflow, each execution of your ML model code is referred to as a run. All runs are associated with an experiment.
  • An MLflow experiment is the primary unit of organisation for MLflow runs: it determines how runs are organised, accessed and maintained. An experiment has multiple runs, and it enables you to efficiently go through those runs and perform activities such as visualisation, search and comparison. In addition, experiments let you access run artefacts and metadata for analysis in other tools.
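
The relationship is easy to see in code. In the hedged sketch below, one experiment groups three runs, and search_runs pulls them back as a pandas DataFrame for comparison; the experiment name, parameter and metric values are invented.

```python
import mlflow

mlflow.set_experiment("churn-prediction")  # created on first use

# Each start_run() call is one run within that experiment.
for depth in (2, 4, 8):
    with mlflow.start_run():
        mlflow.log_param("max_depth", depth)
        mlflow.log_metric("f1", 0.6 + depth * 0.02)  # toy value

# search_runs returns the experiment's runs as a pandas DataFrame,
# which makes comparison and export to other tools straightforward.
runs = mlflow.search_runs(experiment_names=["churn-prediction"])
print(runs[["run_id", "params.max_depth", "metrics.f1"]])
```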

Kubeflow vs MLflow

Both Kubeflow and MLflow are open source solutions designed for the machine learning landscape. They have received massive support from industry leaders and are driven by thriving communities whose contributions make a real difference to the development of the projects. The main purpose of both Kubeflow and MLflow is to create a collaborative environment for data scientists and machine learning engineers, and to enable teams to develop and deploy machine learning models in a scalable, portable and reproducible manner.

However, comparing Kubeflow and MLflow is like comparing apples to oranges. From the very beginning, they were designed for different purposes. The projects have evolved over time and now have overlapping features, but most importantly, they have different strengths. On the one hand, Kubeflow is proficient at machine learning workflow automation using pipelines, as well as model development. On the other hand, MLflow is great for experiment tracking and model registry. From a user perspective, MLflow requires fewer resources and is easier for beginners to deploy and use, whereas Kubeflow is a heavier solution, ideal for scaling up machine learning projects.

Read more about Kubeflow vs. MLflow

Go to the blog

Charmed MLflow vs the upstream project

Charmed MLflow is Canonical’s distribution of the upstream project. It is part of Canonical’s growing MLOps portfolio. It has all the features of the upstream project, to which we add enterprise-grade capabilities such as:

  • Simplified deployment: the time to deployment is less than 5 minutes, enabling users to also upgrade their tools seamlessly.
  • Simplified upgrades using our guides.
  • Automated security scanning: The bundle is scanned at a regular cadence.
  • Security patching: Charmed MLflow follows Canonical’s process and procedure for security patching. Vulnerabilities are prioritised based on severity, the presence of patches in the upstream project, and the risk of exploitation.
  • Maintained images: All Charmed MLflow images are actively maintained.
  • Comprehensive testing: Charmed MLflow is thoroughly tested on multiple platforms, including public cloud, local workstations, on-premises deployments, and various CNCF-compliant Kubernetes distributions.

Further reading

Canonical at America Digital Congress in Chile

4 April 2024 at 14:55

We are excited to share that Canonical is participating in America Digital Congress in Santiago, Chile, for the first time ever. It’s one of the leading digital transformation events in the region, bringing together VPs and experts from the most relevant global tech companies.

Canonical, the publisher of Ubuntu, provides open source security, support and services. In addition to the OS, Canonical offers an integrated data and AI stack. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.

Join us at booth A31 to learn how Canonical can support your digital transformation journey securely and cost-efficiently.

Canonical Expert Talk:
How to build a digital transformation strategy



Date & Time: April 11, 16:15 – 16:55.
C-Level Forum AI & Digital Transformation

Juan Pablo Noreña, Canonical Cloud Field Software Engineer, is delighted to be speaking at America Digital Congress about digital transformation and AI. In this talk, he will explore the significant benefits of introducing open source solutions in all stages of the infrastructure implementation process, from virtualization to AI platforms.

Juan Pablo will also showcase how this approach improves security, reduces costs across the infrastructure life cycle and makes them predictable, offering companies a competitive advantage in the market.

Key topics:

  • A general perspective of the open source role in infrastructure and its benefits.
  • A guide for decision-makers on how and where to start the development of an infrastructure strategy using open source solutions.
  • Explanation of the relevance of support for the solutions to ensure the sustained success of the strategy.

Canonical Partner Programmes

At Canonical, we provide the services our partners need to ensure their hardware and software works optimally with the Ubuntu platform. We operate a range of partner programmes, from essential product certification to strategic collaboration, help with QA and long-term strategic alliances. For technology customers, this has created a thriving market of suppliers with Ubuntu expertise. 

Are you interested in learning more about our partner programmes? Talk to the team at the booth or visit our partner webpage.

Come and meet us at America Digital 

Come visit us at the booth to learn how Canonical could support you in the digital transformation journey. Check out our Data and AI offerings to learn more about our solutions.

Join Canonical Data and AI team at Data Innovation Summit 2024

Canonical is delighted to be a technology partner at the Data Innovation Summit (DIS) in 2024. We are proud to showcase our Data and AI solutions through our conference talk and technology in practice sessions. The event will take place in Kistamässan, Stockholm on April 24-25, 2024. Visit us at booth C71 to learn how open source data and AI solutions can help you take your models to production, from edge to cloud.

Data and AI: get first-hand insights from Canonical experts

The modern enterprise can use AI algorithms and models to learn from their treasure troves of big data, and make predictions or decisions based on the data without being explicitly programmed to do so. What’s more, the AI models grow more accurate over time. 

The magic is in the melding of AI and big data. Data of incredible volume, velocity, and variety is fed into the AI engine, making the AI smarter. Over time, less human intervention is needed for the AI to run properly; in time, the AI can deliver deeper insights—and strategic value—from the ever-increasing pools of data, often in real time. 

In today’s competitive business environment, your AI and data strategies need to be more interconnected than ever. According to an MIT Technology Review survey, 78% of CIOs say that scaling AI to create business value is the top priority of their enterprise data strategy, and 96% of AI leaders agree. Nearly three out of four CIOs also say that data challenges are the biggest factor jeopardising AI success.

The Data Innovation Summit is a significant event in the field of Data and AI, especially in the Nordics. It brings together professionals, enterprise practitioners, technology providers, start-up innovators, and academics working with data and AI. We at Canonical are delighted to announce that we will be participating in this event and sharing our expertise in Data and AI.

Canonical is a well-known publisher of Ubuntu, which is the preferred operating system (OS) for data scientists. In addition to the OS, Canonical offers an integrated data and AI stack. We provide the most cost-effective options to help you gain control over your Total Cost of Ownership (TCO), and ensure reliable security maintenance, allowing you to innovate at a faster pace.

Canonical DIS talk: open source DataOps and MLOps

Canonical Data and AI Product Managers Andreea Munteanu and Michelle Anne Tabirao will be speaking about open source for your DataOps and MLOps.

Talk description

Open source data and AI tools enable organisations to create a comprehensive solution that covers all stages of the data and machine learning lifecycle. This includes correlating data from various sources, regardless of their collection engine, and serving the model in production. Together, DataOps and MLOps drive the collaboration, communication, and integration that great data and AI teams need, making them essential to the model lifecycle. DataOps is an approach to data management that focuses on collaboration, communication, and integration among data engineers, data scientists, and other data-related roles to improve the efficiency and effectiveness of data processes. MLOps is a set of practices that combines machine learning, software development, and operations to enable the deployment, monitoring, and maintenance of machine learning models in production environments.

In this talk, we will explore how to build an end-to-end solution for DataOps and MLOps using open-source solutions, including databases and ML and analytics tools such as OpenSearch, Kubeflow, and MLflow. Professionals can then focus on building ML models without spending time on tooling and operational work. We will highlight some use cases, for example in the telco sector, where MLOps and DataOps are used to optimise the telco network infrastructure and reduce power consumption.

Attendees will learn about the critical factors to consider when selecting tools and best practices needed for building a robust, production-grade ML project.

Come and meet us at DIS 2024

If you are interested in building or scaling your data and AI projects with open source solutions, we are here to help you. Visit our Data and AI offerings to explore our solutions.

Learn more about our Data and AI solutions

Join Canonical at 2024 GTC AI Conference

29 February 2024 at 23:02

As a key technology partner with NVIDIA, Canonical is proud to showcase our joint solutions at NVIDIA GTC again. Join us in person at NVIDIA GTC on March 18-21, 2024 to explore what’s next in AI and accelerated computing. We will be at booth 1601 in the MLOps & LLMOps Pavilion, demonstrating how open source AI solutions can take your models to production, from edge to cloud.

Register for GTC now!

AI on Ubuntu – from cloud to edge

As the world becomes more connected, there is a growing need to extend data processing beyond the data centre to edge devices in the field. Cloud computing provides numerous resources for AI adoption, processing, storage, and analysis, but it cannot support every use case. Deploying models to edge devices can expand the scope of AI applications by enabling you to process some of the data locally and achieve real-time insights without relying exclusively on a centralised data centre or cloud. This is especially relevant when AI applications would be impractical or impossible to deploy in a centralised cloud or enterprise data centre due to issues related to latency, bandwidth and privacy.

Therefore, a solution that enables scalability, reproducibility, and portability is the ideal choice for a production-grade project.  Canonical delivers a comprehensive AI stack with the open source software which your organisation might need for your AI projects from cloud to edge, giving you:

  • The same experience on edge devices and on any cloud, whether private, public or hybrid
  • Low-ops, streamlined lifecycle management
  • A modular and open source suite for reusable deployments

Book a meeting with us

To put our AI stack to the test, during NVIDIA GTC 2024, we will present how our Kubernetes-based AI infrastructure solutions can help create a blueprint for smart cities, leveraging best-in-class NVIDIA hardware capabilities. We will cover both training in the cloud and data centres, and showcase the solution deployed at the edge on Jetson Orin based devices. Please check out the details below and meet our expert on-site.

Canonical’s invited talk at GTC

Accelerate Smart City Edge AI Deployment With Open-Source Cloud-Native Infrastructure [S61494]

Abstract:

Artificial intelligence is no longer confined to data centres; it has expanded to operate at the edge. Some models require low latency, necessitating execution close to end-users. This is where edge computing, optimised for AI, becomes essential. In the most popular use cases for modern smart cities, many envision city-wide assistants deployed as “point-of-contact” devices that are available on bus stops, subways, etc. They interact with backend infrastructure to take care of changing conditions while users travel around the city. That creates a need to process local data gathered from infrastructure like internet-of-things gateways, smart cameras, or buses. Thanks to NVIDIA Jetson modules, these data can be processed locally for fast, low-latency AI-driven insights. Then, as device-local computational capabilities are limited, data processing should be offloaded to the edge or backend infrastructure. With the power of Tegra SoC, data can first be aggregated at the edge devices to be later sent to the cloud for further processing. Open-source deployment mechanisms enable such complex setups through automated management, Day 2 operations, and security. Canonical, working alongside NVIDIA, has developed an open-source software infrastructure that simplifies the deployment of multiple Kubernetes clusters at the edge with access to GPU. We’ll go over those mechanisms, and how they orchestrate the deployment of Kubernetes-based AI/machine learning infrastructure across the smart cities blueprint to profit from NVIDIA hardware capabilities, both on devices and cloud instances.

Presenter: Gustavo Sanchez, AI Solutions Architect, Canonical

Build and scale your AI projects with Canonical and NVIDIA

Starting a deep learning pilot within an enterprise has its set of challenges, but scaling projects to production-grade deployments  brings a host of additional difficulties. These chiefly relate to the increased hardware, software, and operational requirements that come with larger and more complex initiatives.

Canonical and NVIDIA offer an integrated end-to-end solution – from a hardware optimised Ubuntu to application orchestration and MLOps. We enable organisations to develop, optimise and scale ML workloads.

Canonical will showcase 3 demos to walk you through our joint solutions with NVIDIA on AI/ML:

  • Accelerate smart city Edge AI deployments with open-source cloud-native infrastructure – This demo presents an architecture that addresses edge AI challenges such as software efficiency, security, monitoring and day-2 operations. Canonical, working alongside NVIDIA, has developed an open-source software infrastructure that simplifies training on private and public clouds, as well as deployment and operation of AI models on clusters at the edge with access to NVIDIA GPU capabilities.
  • End-to-end MLOps with hybrid cloud capable open-source tooling – Cost optimisation, data privacy, and HPC performance on GPUs are some of the reasons why companies consider private, hybrid and multi-cloud solutions for their data and AI infrastructure. Open-source, cloud-agnostic infrastructure for machine learning operations gives companies the flexibility to move beyond public cloud vendor lock-in, align with strict data compliance constraints and take full advantage of their hardware resources, while automating day-to-day operations.
  • LLM and RAG open-source infrastructure – This demo shows an end-to-end implementation, from data collection and cleaning through to training and inference, of an open-source large language model integrated with an open-source vector database using the retrieval-augmented generation (RAG) technique. It shows how to scrape information from your publicly available company website, embed it into the vector database and have it consumed by the LLM (a toy sketch of the retrieval step follows this list).
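
As a toy illustration of the retrieval step in the RAG demo above, the sketch below builds a tiny in-memory vector index and assembles an augmented prompt; the embedding function is a naive hashed bag-of-words stand-in (not the embedding model used in the demo), the documents are placeholders, and the final LLM call is omitted.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# 1. "Scraped" documents (placeholders for pages from a company website)
documents = [
    "Canonical publishes Ubuntu and offers enterprise open source support.",
    "Charmed Kubeflow is an MLOps platform for training models at scale.",
    "MicroK8s is a lightweight Kubernetes for edge and developer machines.",
]
index = np.stack([embed(d) for d in documents])  # the "vector database"

# 2. Retrieval: embed the question and take the closest documents
question = "What does Canonical offer for MLOps?"
scores = index @ embed(question)
top = [documents[i] for i in np.argsort(scores)[::-1][:2]]

# 3. Augmented prompt handed to the LLM (the generation call is out of scope here)
prompt = "Answer using only this context:\n" + "\n".join(top) + f"\n\nQuestion: {question}"
print(prompt)
```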

Visit our Canonical booth 1601 at GTC to check them out.

Come and meet us at NVIDIA GTC 2024

If you are interested in building or scaling your AI projects with open source solutions, we are here to help you. Visit ubuntu.com/nvidia to explore our joint data centre offerings.

Book a meeting with us

Learn more about our joint solutions

Explore Canonical & Ubuntu at Past GTCs

Warp, Rust-Based Terminal with AI, is Now Available on Linux

By: Joey Sneddon
22 February 2024 at 22:42

Linux developers have a new toy to play with, Warp. Warp is a (currently) closed-source terminal emulator built using the Rust programming language. It offers hardware acceleration, integrated AI, collaborative capabilities, and uses a “block” based approach to group commands and output that help set it apart from traditional console-based tools. Plus, when it comes to text input Warp functions more like an IDE or text editor by offering filtering and selections, cursor positioning (including multiple cursors), auto-completion, syntax highlighting, and more besides — the following video gives a good overview: Previously a Mac-only app, Warp is now available for […]


Edge AI: what, why and how with open source

19 February 2024 at 09:55

Edge AI is transforming the way that devices interact with data centres, challenging organisations to stay up to speed with the latest innovations. From AI-powered healthcare instruments to autonomous vehicles, there are plenty of use cases that benefit from artificial intelligence on edge computing. This blog will dive into the topic, capturing key considerations when starting an edge AI project, main benefits, challenges and how open source fits into the picture.

What is Edge AI?

AI at the edge, or Edge AI, refers to the combination of artificial intelligence and edge computing. It aims to execute machine learning models on interconnected edge devices. It enables devices to make smarter decisions without always connecting to the cloud to process the data. It is called edge because the machine learning model runs near the user rather than in a data centre.

Edge AI is growing in popularity as industries identify new use cases and opportunities to optimise their workflows, automate business processes or unlock new chances to innovate. Self-driving cars, wearable devices, security cameras, and smart home appliances are among the technologies that take advantage of edge AI capabilities to deliver information to users in real-time when it is most essential. 

Benefits of edge AI

Nowadays, algorithms are capable of processing different kinds of input, such as text, sound or images. They are particularly useful close to the end users who face real-world problems. These AI applications would be impractical or even impossible to deploy in a centralised cloud or enterprise data centre due to issues related to latency, bandwidth and privacy.

Some of the most important benefits of edge AI are:

  • Real-time insights: Since data is analysed in real time, close to the user, edge AI enables real-time processing and reduces the time needed to complete activities and derive insights.
  • Cost savings: Depending on the use case, some data can often be processed at the edge where it is collected, so it doesn’t all have to be sent to the data centre for training the machine learning algorithms. This reduces the cost of storing the data, as well as training the model. At the same time, organisations often utilise edge AI to reduce the power consumption of the edge devices, by optimising the time they are on and off, which again leads to cost reduction.
  • High availability: Having a decentralised way of training and running the model enables organisations to ensure that their edge devices benefit from the model even if there is a problem within the data centre.
  • Privacy: Edge AI can analyse data in real time without exposing it to humans, increasing the privacy of appearance, voice or identity of the objects involved. For example, surveillance cameras do not need someone to look at them, but rather have machine learning models that send alerts depending on the use case or need.
  • Sustainability: Using edge AI to reduce the power consumption of edge devices doesn’t just minimise costs, it also enables organisations to become more sustainable. With edge AI, enterprises can avoid utilising their devices unless they are needed.

Use cases in the industrial sector

Across verticals, enterprises are quickly developing and deploying edge AI models to address a wide variety of use cases. To get a better sense of the value that edge AI can deliver, let’s take a closer look at how it is being used in the industrial sector.

Industrial manufacturers struggle with large facilities that often use a significant number of devices. A survey fielded in the spring of 2023 by Arm found that edge computing and machine learning were among the top five technologies that will have the most impact on manufacturing in the coming years. Edge AI use cases are often tied to the modernisation of existing manufacturing factories.  They include production scheduling, quality inspection, and asset maintenance – but applications go beyond that. Their main objective is to improve the efficiency and speed of automation tasks like product assembly and quality control.

Some of the most prominent use cases of Edge AI in manufacturing include:

  • Real-time detection of defects as part of quality inspection processes that use deep neural networks for analysing product images. Often, this also enables predictive maintenance, helping manufacturers minimise the need to reactively fix their components by instead addressing potential issues preemptively. 
  • Execution of real-time production assembly tasks based on low-latency operations of industrial robots. 
  • Remote support of technicians on field tasks based on augmented reality (AR) and mixed reality (MR) devices.

Low latency is the primary driver of edge AI in the industrial sector. However, some use cases also benefit from improved security and privacy. For example, 3D printers can use edge AI to protect intellectual property that would otherwise need to pass through a centralised cloud infrastructure.

Best practices for edge AI

Compared to other kinds of AI projects, running AI at the edge comes with a unique set of challenges. To maximise the value of edge AI and avoid common pitfalls, we recommend following these best practices:

  • Edge device: At the heart of edge AI are the devices which end up running the models. They all have different architectures, features and dependencies. Ensure that the capabilities of your hardware align with the requirements of your AI model, and ensure that the software, such as the operating system, is certified on the edge device.
  • Security: Both in the data centres and on the edge devices there are artefacts that could compromise the security of an organisation. Whether we talk about the data used for training, the ML infrastructure used for developing or deploying the ML model, or the operating system of the edge device, organisations need to protect all these artefacts. Take advantage of the appropriate security capabilities to safeguard these components, such as secure packages, secure boot of the OS from the edge device, or full-disk encryption on the device.
  • Machine learning model size: Depending on the use case, the size of the machine learning model differs. The model needs to fit on the end device it is intended to run on, so developers need to optimise the model size, which largely dictates the chances of deploying it successfully (see the quantization sketch after this list for one common approach).
  • Network connection: The machine learning lifecycle is an iterative process, so models need to be periodically updated. Therefore, the network connection influences both the data collection process and the model deployment capabilities. Organisations need to check and ensure there is a reliable network connection before deploying models or building an AI strategy.
  • Latency: Organisations often use edge AI for real-time processing, so the latency needs to be minimal. For example, retailers need instant alerts when fraud is detected and cannot ask customers to wait at the cashiers for minutes before confirming payment. Depending on the use case, latency needs to be assessed and considered when choosing the tooling and model update cadence.
  • Scalability: Scale is often limited by the cloud bandwidth available to move and process information, which leads to high costs. To ensure broader scalability, data collection and part of the data processing should happen at the edge.
  • Remote management: Organisations often have multiple devices or multiple remote locations, so scaling to all of them brings new challenges related to their management. To address these challenges, ensure that you have mechanisms in place for easy, remote provisioning and automated updates.
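
As referenced in the model-size point above, one common way to shrink a model for edge deployment is quantization. The sketch below applies PyTorch dynamic quantization to a toy network purely as an illustration of the idea; it is not tied to any Canonical tooling.

```python
import io

import torch
import torch.nn as nn

# A small model standing in for one destined for an edge device.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Dynamic quantization converts Linear weights to int8,
# shrinking the serialized model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_in_bytes(m: nn.Module) -> int:
    """Serialize the state dict in memory and measure its size."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print(f"fp32: {size_in_bytes(model)} bytes, int8: {size_in_bytes(quantized)} bytes")
```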

Edge AI with open source

Open source is at the centre of the artificial intelligence revolution, and open source solutions can provide an effective path to addressing many of the best practices described above. When it comes to edge devices, open source technology can be used to ensure the security, robustness and reliability of both the device and machine learning model. It gives organisations the flexibility to choose from a wide spectrum of tools and technologies, benefit from community support and quickly get started without a huge investment. Open source tooling is available across all layers of the stack, from the operating system that runs on the edge device, to the MLOps platform used for training, to the frameworks used to deploy the machine learning model.

Edge AI with Canonical

Canonical delivers a comprehensive AI stack with all the open source software organisations need for their edge AI projects.

Canonical offers an end-to-end MLOps solution that enables you to train your models. Charmed Kubeflow is the foundation of the solution, and it is seamlessly integrated with leading open source tooling such as MLflow for model registry or Spark for data streaming. It gives organisations flexibility to develop their models on any cloud platform and any Kubernetes distribution, offering capabilities such as user management, security maintenance of the used packages or managed services.

The operating system that the device runs plays an important role. Ubuntu Core is the distribution of the open source Ubuntu operating system dedicated to IoT devices. It has capabilities such as secure boot and full disk encryption to ensure the security of the device. For certain use cases, running a small cloud such as MicroCloud enables unattended edge clusters to leverage machine learning.

Packaging models as snaps makes them easy to maintain and update in production. Snaps offer a variety of benefits including OTA updates, automatic rollback in case of failure and no-touch deployment. At the same time, for managing the lifecycle of the machine learning model and for remote device management, brand stores are an ideal solution.

To learn more about Canonical’s edge AI solutions, get in touch.

Further reading

5 Edge Computing Examples You Should Know

How a real-time kernel reduces latency in telco edge clouds

MLOps Toolkit Explained

Mozilla Announces Layoffs, Renews Focus on Firefox

By: Joey Sneddon
13 February 2024 at 23:38

Mozilla, maker of the Firefox web browser, is the latest tech company to announce layoffs. The non-profit says it is scaling back development on a number of projects and, as a result, 60 employees (roughly 5% of its total workforce) will lose their jobs. Among projects TechCrunch reports Mozilla has earmarked for cutbacks is its Online Footprint Scrubber — a paid-for feature announced barely a week ago! Mozilla VPN, Relay and other privacy products are also being scaled back, with the company of the opinion that those products don’t offer much differentiation from competitors, and are struggling as a result. The Mozilla.social […]


AI on-prem: what should you know?

30 January 2024 at 13:11

Organisations are reshaping their digital strategies, and AI is at the heart of these changes, with many projects now ready to run in production. Enterprises often start these AI projects on the public cloud because of the ability to minimise the hardware burden. However, as initiatives scale, organisations often look to migrate the workloads on-prem for reasons including costs, digital sovereignty or compliance requirements. Running AI on your own infrastructure comes with clear benefits, but it also raises some major challenges that infrastructure and MLOps experts need to consider.

MLOps acts as the enabler in running AI workloads in a repeatable and reproducible manner. MLOps platforms such as Charmed Kubeflow are cloud-native applications that run on Kubernetes. Building such an architecture on-prem helps organisations to easily deploy, manage and scale their AI applications.

Advantages of AI on-prem

When building their AI strategies, organisations should consider factors such as cost-effectiveness, manageability, security and compliance, and performance. Let’s take a look at how running AI projects on-prem addresses these priorities.

AI on existing infrastructure

Building a completely new data centre for AI projects can be overwhelming and take time, but it isn’t always necessary. If you already have existing infrastructure that you aren’t fully utilising, it could be suitable for your AI initiatives. Doing AI on-prem on existing infrastructure is a great way to quickly kickstart new projects and experiments, assess the possible return on investment of different use cases, and gain additional value from your existing hardware.

Secure ML workloads on-prem

Many organisations already have well-defined internal policies that any new AI initiative also needs to follow. Adhering to these policies is easier on on-prem infrastructure, ensuring a secure and compliant foundation for the MLOps platform and enabling you to build repeatable and reproducible ML pipelines. Especially in highly regulated industries, running AI on-prem can accelerate compliance and security checks, helping you to focus on building models rather than on security concerns.

Cost-effective solution

While public clouds nowadays offer different types of instances for running machine learning workloads, for enterprises that store all their data on their own infrastructure, moving it would come at a significant cost. You can circumvent this challenge entirely by running your AI projects in the same location where you already store your data. This is one of the reasons why organisations often prefer building their AI workloads on-prem.

Disadvantages of AI on-prem

Building and scaling AI projects requires computing power. For organisations that need more computing power, this is a big investment to make before even getting started. At the same time, on-prem infrastructure requires a significant upfront cost and comes with the burden of operating the infrastructure post-deployment. On-prem deployments also have only a limited number of pre-trained models and ready-made services that enterprises can take advantage of. 

At the opposite end of the spectrum, public clouds are easy to get started with and do not require a big upfront investment. They have large libraries of pre-trained models, such as Amazon Bedrock, that can give organisations a head start. That being said, public clouds often prove to be less cost-effective in the long term.

When should you run AI on-prem?

Rolling out a new strategic initiative such as an artificial intelligence project comes with a new set of challenges. When deciding whether to run your AI initiatives on-prem, there are a number of key factors you should consider to determine whether it’s the right approach for you:

  • Compute performance: It’s no secret that AI projects require significant computing power, and these requirements are only increasing. You should only commit to an on-prem AI strategy if you are certain that you have the resources to satisfy these compute demands, with room to scale. 
  • Industry regulations: Complying with industry regulations is often easier when you have full control over your data on your own hardware. If you operate in highly-regulated sectors such as healthcare or financial services, then on-prem AI is likely to be the right choice. 
  • Privacy: These same principles extend to the broader realm of data privacy, which plays an important role in any AI project. On-prem infrastructure represents a compelling option for organisations looking to maximise control over their data and ML models.
  • Initial investment: The best infrastructure option will depend largely on the budget allocated for the initial investment. If you lack the resources to support upfront hardware costs, public cloud may be more suitable – unless you have existing, unutilised on-prem infrastructure that you can take advantage of.
  • Customisable solution: Do you want a ready-made solution, or a platform that enables you to customise your AI deployment to suit your specific requirements? If you’re looking for flexibility, on-prem is the clear winner.

Open source solutions for AI on-prem

Open source is at the heart of the AI revolution. There are a growing number of open source solutions that benefit from wide adoption in the machine-learning world. Organisations can build a fully open source MLOps platform on-prem using some of the leading tools available:

  • OpenStack: a fully functional cloud platform that ensures smooth integration with leading performance acceleration devices, such as GPUs.
  • Kubernetes: can be used as a container orchestration tool.
  • Kubeflow: an MLOps platform to develop and deploy machine learning models.
  • MLflow: a machine learning platform for model registry. 

Open source tools come with plenty of benefits. However, it is important to choose the right versions. To ensure the security of the tooling as well as seamless integration, organisations need official distributions that are suitable for enterprise deployments – such as those delivered by Canonical.

Want to learn more about AI on private cloud with open source? Enroll now for our live webinar.

Hybrid strategy with open source 

According to the Cisco 2022 Global Hybrid Cloud Trends Report, 82% of IT decision-makers have adopted a hybrid IT strategy. Correlating this with the focus that organisations now put on their artificial intelligence strategy, it is easy to see that many new projects will run in a hybrid cloud scenario. The open source tools mentioned above, which Canonical supports and integrates into an end-to-end solution, enable organisations to build and scale their AI initiatives on their cloud of choice. It helps them kickstart projects on a public cloud to minimise the hardware burden and then develop a hybrid cloud strategy that ensures time effectiveness and cost efficiency.

AI webinar series

Follow our webinar series and stay up to date with the latest news from the industry.

Further reading

Meet Canonical at Mobile World Congress Barcelona 2024

26 January 2024 at 09:01

The world’s largest and most influential telecommunications exhibition event, Mobile World Congress (MWC), is taking place in Barcelona on 26-29 February 2024. Canonical is excited to join this important annual event once again and meet the telecom industry. 

Telecommunications is a key sector for Canonical. We offer solutions for private, public and hybrid/multi cloud environments, with a consistent experience across the entire telecom spectrum, from core clouds to the edge, with a single set of tooling. Built with the same philosophy as Ubuntu – secure, trusted and production-grade open source backed by full operations support – our solutions are fully upstream and integrate the latest technology advancements that telco leaders require to deliver best-in-class services to their customers. 

We are looking forward to meeting you at MWC 2024. Come and speak with our experts to learn how we can help you in your journey to cost-effective, secure and trusted open source telecom solutions for your infrastructure.

Hot topics in telco

To meet today’s customer expectations, telecom operators require flexible, scalable and agile operations across the many service types that make up a modern mobile network.

At this year’s MWC event in Barcelona, Canonical’s team will explain how you can elevate your telecom infrastructure with the latest innovations in cloud-native technologies and modernise your telco clouds with open source. These strategies will empower you to meet and exceed customer expectations with repeatable and reliable deployments.

Automation at scale for telco edge clouds with CNEP

We have been listening to our telco customers to understand their needs for cost-effective, modern edge clouds that they can rely on. Canonical is proud to offer a new holistic solution, Cloud Native Execution Platform (CNEP), to meet these needs precisely at telco edge clouds.

With CNEP, we deliver the ideal software stack for telco edge clouds with automation in place, based on fully upstream and CNCF certified Kubernetes running on bare metal hardware for best performance. It brings all essential open source components together, with the aim of achieving high performance in data processing and delivery, whilst ensuring platform security and efficiency with Ubuntu Pro.

At MWC, our team will explain how operators can achieve scalable and repeatable deployment of edge clouds with CNEP. For Open Radio Access Network (RAN) readiness, CNEP is the ideal RAN platform, bringing all the technology features that cloud-native Open RAN components require. CNEP is also tailored for best performance and security assurance for distributed compute and multi-access edge computing (MEC) applications, enabling businesses to run their telco workloads on 5G edge networks.

Real-time Ubuntu for ultra-reliable and low-latency communications

Canonical has been working with all major silicon hardware vendors, such as Intel, to deliver the highest levels of performance and security to telco networks and applications. 

We have been running advanced engineering programs with the aim of enabling the latest innovations in silicon hardware in telco software infrastructure at a rapid pace, with quick software release cycles. As part of our collaboration with Intel, we have integrated Intel FlexRAN in Ubuntu real-time kernel for telco applications and networking software, which has enabled real-time processing at both operating system and silicon levels.

At this year’s MWC, we will explain how Ubuntu Pro brings real-time data processing capabilities to the telco edge for mission-critical operations and also ensures confidential computing for the most-sensitive telco workloads.

Sustainable telco edge infrastructure with an energy-efficient system stack

Telecom networks will increasingly deploy edge cloud sites in the journey to distributed and flexible cloud-native operations. This requires support for several features across the hardware and software stack to make sure that platforms are energy and cost efficient. From container images to bare metal hardware automation, Canonical’s edge cloud stack is equipped with features that ensure sustainable operations.

In Barcelona, we will explain how our open source software stack can deliver optimal deployments on telco edge clouds and help operators meet their sustainability goals.

Demos

At MWC 2024, you will get the chance to see our technical team demonstrate Canonical’s CNEP solution. This is a great opportunity for all players in the telco ecosystem to see how we meet sector requirements on cloud-native operations at the telco edge with automation. In our demo, the Canonical team will run CNEP on Intel’s 4th Generation Xeon Scalable Processor, bringing the acceleration capabilities provided by Xeon to large-scale edge network rollout for cost-efficient Open RAN deployments.

CNEP’s open and upstream APIs, along with Canonical’s observability stack and telemetry solutions, enable machine learning algorithms to assist edge cloud operations. The Canonical team will demonstrate how our AI/ML platform solutions can be used to boost the effectiveness of distributed computing applications running on telco edge clouds. We will show how a multi-cloud data platform can be formed for the various data types collected from a telecom network. We will also show ML-based anomaly detection and an LLM that summarises and explains data collected from the network.
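
As a rough illustration of the kind of ML-based anomaly detection mentioned above, the sketch below flags unusual samples in synthetic network telemetry with scikit-learn’s IsolationForest; the feature names, values and contamination rate are invented for the example, not those used in the demo.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic per-cell telemetry: [throughput_mbps, latency_ms, packet_loss_pct]
normal = rng.normal(loc=[800, 12, 0.1], scale=[60, 2, 0.05], size=(500, 3))
faulty = rng.normal(loc=[200, 45, 2.5], scale=[40, 5, 0.5], size=(5, 3))
samples = np.vstack([normal, faulty])

# Fit an unsupervised model and flag the most unusual samples
detector = IsolationForest(contamination=0.01, random_state=0).fit(samples)
flags = detector.predict(samples)  # -1 = anomaly, 1 = normal
print(f"{(flags == -1).sum()} samples flagged as anomalous out of {len(samples)}")
```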

Come and meet us at MWC 2024

If you are interested in building your own modern telecom infrastructure and migrating to open source with cost-effective, secure and trusted solutions, Canonical can help you. We provide a full stack for your telecom infrastructure, enabling secure, trusted, flexible, optimised, automated and low-touch operations.

To learn more about our telco solutions, meet us to discuss your telecom needs at MWC Barcelona 2024, and visit our webpage at ubuntu.com/telco.

If you’re unable to find a suitable time, please reach out to Mariam Tawakol <mariam.tawakol@canonical.com> or Jacob Boe <jacob.boe@canonical.com>. Let them know your availability and what you’re interested in, and they will set up a meeting for you.

Further reading

Canonical joins Open Networking Foundation

Fast and reliable telco edge clouds with Intel FlexRAN and Real-time Ubuntu

Bringing automation to telco edge clouds at scale

How telcos are building carrier-grade infrastructure using open source

Ubuntu AI podcast: AI for day-to-day tasks

25 January 2024 at 07:57

Welcome to Ubuntu AI podcast, where we talk about AI with the industry leaders.

This episode was recorded in Riga, during the Ubuntu Summit 2023. We’re talking about the implementation of AI solutions for day-to-day tasks with Nextcloud CEO Frank Karlitschek.

AI usage in Nextcloud

We talk about AI usage at Nextcloud, where privacy plays a big role. Listen to the episode to learn more about how to ensure customers’ privacy when implementing AI solutions. We also dive deeper into use cases for Nextcloud.

Implementing AI solutions within your organization

You can build all your AI projects with secure and supported Canonical MLOps. Stable, secure, scalable tooling is a priority for enterprises, and having AI that enterprises can genuinely benefit from is critical.

If you are still defining the use-cases within your organization, our expert team is here to provide Canonical’s AI consulting services, designed to support you in every step of your journey.

Learn more about Canonical AI solutions here.

Download our guide to MLOps. Take your AI projects to production.
