
Kubernetes backups just got easier with the CloudCasa charm from Catalogic

Kubernetes makes deploying and managing stateless applications straightforward, and because stateless workloads carry little data that needs protecting, proper backups of the Kubernetes environment have often been postponed or neglected. Today, however, more and more applications running on Kubernetes are stateful, making backup and recovery solutions for these systems critical. Neglecting backup is risky: as applications become more complex and need to preserve their state, the challenges of recovering a setup increase. Platform engineers therefore need a comprehensive backup solution for Kubernetes.

CloudCasa is a SaaS offering that provides data protection, recovery, and migration for Kubernetes. It provides a best-in-class solution for multi-cluster, multi-cloud and hybrid-cloud Kubernetes environments.


Caption: The CloudCasa dashboard, where it all comes together

The CloudCasa service comes in two variants: The CloudCasa Pro service provides centralised backup services for large, complex, multi-cluster, multi-cloud, and hybrid cloud environments. The recently introduced CloudCasa for Velero service brings similar enterprise features to existing installations of Velero, the popular open source Kubernetes backup solution. These features include centralised management and monitoring, alerts, job history, guided recovery, RBAC, and commercial support. Taking advantage of these leading-edge Kubernetes backup capabilities is now easier than ever, as Catalogic has released a CloudCasa charm.

Why use the CloudCasa charm?

The charm for CloudCasa is a Kubernetes operator created with the Charm SDK for Juju. Juju is an open source orchestration engine for software operators that enables the deployment, integration and lifecycle management of applications at any scale, on any infrastructure. An application can be a database, a web server, a dashboard or, in this case, backup software.

As a SaaS offering, an important part of CloudCasa’s value proposition is easy set-up and configuration, and minimal time to first backup. The only software component that customers need to install is the CloudCasa agent, so the engineering team is always looking for ways to make installation and upgrade of the agent easier for customers. They also wanted a native integration for Canonical’s Kubernetes platform. Juju was the perfect fit, and the charm makes consuming CloudCasa seamless for users.

“We’re excited that, working with our partners at Canonical, we were able to quickly implement the CloudCasa charm, providing convenient installation and management of our agent and consistent operations for customers using the Juju framework,” said Bob Adair, Head of Product Management at CloudCasa. “As Kubernetes becomes more prevalent in the enterprise and the applications running on it become stateful and more complex, DevOps, SRE, and platform engineering teams need to think seriously about adequately protecting and managing these critical environments. This is where both CloudCasa and charms come in.”

How to install CloudCasa using the charm

The CloudCasa charm can be installed directly from Charmhub (charmhub.io) using just a few simple Juju commands. To get started with an existing cluster and a working Juju installation, run:

$ juju deploy cloudcasa

Then, as the minimum required configuration, set the cluster ID:

$ juju config cloudcasa clusterid=<clusterid>
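
After setting the cluster ID, you can verify that the agent has deployed and settled. A minimal check using standard Juju commands (the exact application and unit status shown will depend on your model):

$ juju status cloudcasa
$ juju config cloudcasa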

The CloudCasa application itself also provides a way to trigger installation of the charm: users can do so directly from the Add and Edit Cluster wizards in the CloudCasa UI.

Caption: Select the charm when configuring a K8s cluster in the Add Cluster wizard

About CloudCasa by Catalogic

CloudCasa by Catalogic is a Kubernetes backup-as-a-service providing innovative multi-cloud data protection, migration, and disaster recovery for Kubernetes applications and cloud data services. CloudCasa enables multi-cluster and multi-cloud application resiliency and mobility with granular or cluster-level recovery, across accounts, regions and even across clouds. CloudCasa is fully compatible with and complementary to Velero, the open-source Kubernetes backup tool that has been downloaded over 100 million times. Sign up for CloudCasa for Velero.

Read more

More questions about operators, Juju and charms?

Contact Canonical

(Photo by Taylor Vick on Unsplash)

Canonical Delivers Secure, Compliant Cloud Solutions for Google Distributed Cloud

9 April 2024 at 10:55

Today, Canonical is thrilled to announce our expanded collaboration with Google Cloud to provide Ubuntu images for Google Distributed Cloud. This partnership empowers Google Distributed Cloud customers with security-focused Ubuntu images, ensuring they meet the most stringent compliance standards.

Since 2021, Google Cloud, with its characteristic vision, has built a strong partnership with Canonical. This collaboration highlights both companies’ commitment to providing customers with the air-gapped cloud solutions they need. Through this partnership, Google Cloud demonstrates its strategic brilliance – delegating foundational image creation and maintenance to Canonical’s expertise, allowing Google Cloud to focus on the heart of Google Distributed Cloud development. Canonical’s dedication to rigorous testing upholds the reliability that data centers demand. Moreover, proactive support helps swiftly tackle critical issues, ensuring seamless data center operations. This partnership is a testament to the power of strategic collaborations in the tech sector:

  • GDC Ready OS Images: Canonical supports multiple active releases of Google Distributed Cloud (1.9.x, 1.10.x, 1.11.x, and 1.12.x) ensuring Google Cloud has flexibility and choice.
  • Risk Mitigation: Canonical employs a two-tiered image system, with “development” and “stable” tiers. This allows for thorough testing of changes before they are released into the stable production environment, minimizing potential problems.

These key benefits are the result of our unwavering pursuit of progress and innovation. Google Distributed Cloud customers can expect to reap the rewards of our continuous hard work:

  • FIPS & CIS Compliance: Google Distributed Cloud customers operating in highly regulated industries can confidently deploy FIPS-compliant and CIS-hardened Ubuntu images, knowing they adhere to critical security standards.
  • Multi-distro Support: Ubuntu’s adaptability allows Google Distributed Cloud users to run a diverse range of distro images, maximizing their choice and flexibility within the cloud environment.
  • Air-gapped Innovation: Canonical and Google Cloud are dedicated to supporting air-gapped cloud technology, providing secure, cutting-edge solutions for customers with even the most sensitive data requirements.

At Canonical, we’re committed to open-source innovation. This collaboration with Google Cloud is a prime example of how we can work together to deliver industry-leading cloud solutions to our customers. We look forward to continued partnership and providing even more value to the Google Distributed Cloud ecosystem.

Canonical accelerates AI Application Development with NVIDIA AI Enterprise

18 March 2024 at 22:10

Charmed Kubernetes support comes to NVIDIA AI Enterprise

Canonical’s Charmed Kubernetes is now supported on NVIDIA AI Enterprise 5.0. Organisations using Kubernetes deployments on Ubuntu can look forward to a seamless licensing migration to the latest release of the NVIDIA AI Enterprise software platform, which provides developers with the latest AI models and optimised runtimes.

NVIDIA AI Enterprise 5.0

NVIDIA AI Enterprise 5.0 is supported across workstations, data centres, and cloud deployments. New updates include:

  • NVIDIA NIM microservices: a set of cloud-native microservices that developers can use as building blocks for custom AI application development and to speed production AI. NIM will be supported on Charmed Kubernetes.
  • NVIDIA API catalog: provides quick access for enterprise developers to experiment, prototype and test NVIDIA-optimised foundation models powered by NIM. When ready to deploy, enterprise developers can export the enterprise-ready API and run it on a self-hosted system.
  • Infrastructure management enhancements include support for vGPU heterogeneous profiles, Charmed Kubernetes, and new GPU platforms.

Charmed Kubernetes and NVIDIA AI Enterprise 5.0

Data scientists and developers leveraging NVIDIA frameworks and workflows on Ubuntu now have a single platform to rapidly develop AI applications on the latest generation of NVIDIA Tensor Core GPUs. For data scientists and AI/ML developers who want to deploy their latest AI workloads using Kubernetes, it is vital to extract the most performance from Tensor Core GPUs through NVIDIA drivers and integrations.

Caption: NVIDIA AI Enterprise 5.0

Charmed Kubernetes from Canonical provides several features that are unique to this distribution, including NVIDIA operators and GPU optimisation features, as well as composability and extensibility through customised integrations with the Ubuntu operating system.

Best-In-Class Kubernetes from Canonical 

Charmed Kubernetes can automatically detect GPU-enabled hardware and install required drivers from NVIDIA repositories. With the release of Charmed Kubernetes 1.29, the NVIDIA GPU Operator charm is available for specific GPU configuration and tuning. With support for GPU operators in Charmed K8s, organisations can rapidly and repeatedly deploy the same models utilising existing on-prem or cloud infrastructure to power AI workloads. 
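
As an illustrative sketch, deploying the operator into an existing Charmed Kubernetes model follows the standard Juju workflow. The charm name used below is an assumption, so check Charmhub and the charm documentation for the exact name and required integrations:

juju deploy nvidia-gpu-operator   # assumed charm name; verify on Charmhub
juju status nvidia-gpu-operator   # wait for the unit to report active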

The NVIDIA GPU Operator automatically detects GPUs on the system and installs the required drivers from NVIDIA repositories. It also enables optimal configurations through features such as NVIDIA Multi-Instance GPU (MIG) technology, extracting the most efficiency from Tensor Core GPUs. GPU-optimised instances for AI/ML applications reduce latency and allow for more data processing, freeing capacity for larger-scale applications and more complex model deployments.

Paired with the GPU Operator, the Network Operator enables GPUDirect RDMA (GDR), a key technology that accelerates cloud-native AI workloads by orders of magnitude. GDR allows for optimised network performance, by enhancing data throughput and reducing latency. Another distinctive advantage is its seamless compatibility with NVIDIA’s ecosystem, ensuring a cohesive experience for users. Furthermore, its design, tailored for Kubernetes, ensures scalability and adaptability in various deployment scenarios. This all leads to more efficient networking operations, making it an invaluable tool for businesses aiming to harness the power of GPU-accelerated networking in their Kubernetes environments.

Speaking about these solutions, Marcin “Perk” Stożek, Kubernetes Product Manager at Canonical says: “Charmed Kubernetes validation with NVIDIA AI Enterprise is an important step towards an enterprise-grade, end-to-end solution for AI workloads. By integrating NVIDIA Operators with Charmed Kubernetes, we make sure that customers get what matters to them most: efficient infrastructure for their generative AI workloads.” 

Getting started is easy (and free). You can rest assured that Canonical experts are available to help if required.

Get started with Canonical open source solutions with NVIDIA AI Enterprise 

Try out NVIDIA AI Enterprise with Charmed Kubernetes with a free, 90-day evaluation

How should a great K8s distro feel? Try the new Canonical Kubernetes, now in beta

Kubernetes revolutionised container orchestration, allowing faster and more reliable application deployment and management. But even though it transformed the world of DevOps, it introduced new challenges around security maintenance, networking and application lifecycle management. 

Canonical has a long history of providing production-grade Kubernetes distributions, which gave us great insights into Kubernetes’ challenges and the unique experience of delivering K8s that match the expectations of both developers and operations teams. Unsurprisingly, there is a world of difference between them. Developers need a quick and reproducible way to set up an application environment on their workstations. Operations teams with clusters powering the edge need lightweight high-availability setups with reliable upgrades. Cloud installations need intelligent cluster lifecycle automation to ensure applications can be integrated with each other and the underlying infrastructure.

We provide two distributions, Charmed Kubernetes and MicroK8s, to meet those different expectations. Charmed Kubernetes wraps upstream K8s with software operators to provide lifecycle management and automation for large and complex environments. It is also the best choice if the Kubernetes cluster has to integrate with custom storage, networking or GPU components. MicroK8s has a thriving community of users; it is a production-grade, ZeroOps solution that powers laptops and edge environments. It is the simplest way to get Kubernetes anywhere and focus on software product development instead of working with infrastructure routines and operations.

After providing Kubernetes distributions for over seven years, we decided to consolidate our experience into a new distribution that combines the best of both worlds: ZeroOps for small clusters and intelligent automation for larger production environments that also want to benefit from the latest community innovations.

Canonical Kubernetes will be our third distribution and an excellent foundation for future MicroK8s and Charmed Kubernetes releases. You can find its beta in our Snap Store under the simple name k8s. We based it on the latest upstream Kubernetes 1.30 beta, which officially came out on 12 March. It will be a CNCF conformant distribution with an enhanced security posture and best-in-class open source components for the most demanding user needs: network, DNS, metrics server, local storage, ingress, gateway, and load balancer.

ZeroOps with the most essential features built-in

Canonical Kubernetes is easy to install and easy to maintain. Like MicroK8s, Canonical Kubernetes is installed as a snap, giving developers a great installation experience and advanced security features such as automated patch upgrades. Adding new nodes to your cluster comes with minimum hassle. It also provides a quick way to set up high availability.

You need two commands to get a single node cluster, one for installation and another for cluster bootstrap. You can try it out now on your console by installing the k8s snap from the beta channel:

sudo snap install k8s --channel=1.30-classic/beta --classic
sudo k8s bootstrap

If you look at the status of your cluster just after bootstrap – with the help of the k8s status command – you might immediately spot that the network, dns, and metrics-server are already running. In addition to those three, Canonical Kubernetes also provides local-storage, ingress, gateway, and load-balancer, which you can easily enable. Under the hood, these are powered by Cilium, CoreDNS, OpenEBS, and Metrics Server. We bundle these as built-in features to ensure tight integration and a seamless experience. We want to emphasise standard Kubernetes APIs and abstractions to minimise disruption during upgrades while enabling the platform to evolve.

All our built-in features come with default configurations that make sense for the most popular use cases, but you can easily change them to suit your needs.
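
For example, once the cluster is bootstrapped you can inspect what is running and switch on additional built-in features. A minimal sketch: the k8s status command is referenced above, while the k8s enable subcommand and feature names are assumptions, so check the Canonical Kubernetes documentation for the exact syntax:

sudo k8s status                # network, dns and metrics-server enabled by default
sudo k8s enable local-storage  # assumed subcommand for turning on a built-in feature
sudo k8s enable ingress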

Same Kubernetes for developer workstations, edge, cloud and data centres

Typical application development flows start with the developer workstation and go through CI/CD pipelines to end up in the production environment. These software delivery stages, spanning various environments, should be closely aligned to enhance developer experience and avoid infrastructure configuration surprises as your software progresses through the pipeline. When done right, you can deploy applications faster. You also get better security assurance as everyone can use the same K8s binary offered by the same vendor across the entire infrastructure software stack.

When you scale up from the workstation to a production environment, you will inevitably be exposed to a different class of problems inherent to large-scale infrastructure. For instance, managing and upgrading cluster nodes becomes complicated and time-consuming as the number of nodes and applications grows. To provide the smooth automation administrators need, we offer Kubernetes lifecycle management through Juju, Canonical’s open source orchestration engine for software operators.  

If you have Juju installed on your machine already, a Canonical Kubernetes cluster is only a single command away:

juju deploy k8s --channel edge
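
Scaling the cluster afterwards follows the usual Juju pattern. A minimal sketch, assuming the k8s charm accepts additional units as cluster nodes in its default configuration:

juju add-unit k8s -n 2   # grow the cluster by two nodes
juju status              # watch the new units join and settle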

By letting Juju automate your lifecycle management, you can benefit from its rich integration ecosystem, including the Canonical Observability Stack.

Enhanced security posture

Security is critical to any Kubernetes cluster, and we have addressed it from the beginning. Canonical Kubernetes 1.30 installs as a snap with a classic confinement level, enabling automatic patch upgrades to protect your infrastructure against known vulnerabilities. Canonical Kubernetes will be shipped as a strict snap in the future, which means it will run in complete isolation with minimal access to the underlying system’s resources. Additionally, Canonical Kubernetes will comply with security standards like FIPS, CIS and DISA-STIG.

Critical functionalities we have built into Canonical Kubernetes, such as networking or dns, are shipped as secure container images maintained by our team. Those images are built with Ubuntu as their base OS and benefit from the same security commitments we make on the distribution.

While it is necessary to contain core Kubernetes processes, we must also ensure that the user or operator-provided workloads running on top get a secure, adequately controlled environment. Future versions of Canonical Kubernetes will provide AppArmor profiles for the containers that do not inherit the enhanced features of the underlying container runtime. We will also work on creating an allowlist for kernel modules that can be loaded using the Kubernetes Daemonsets. It will contain a default list of the most popular modules, such as GPU modules needed by AI workloads. Operators will be able to edit the allowlist to suit their needs.

Try out Canonical Kubernetes 1.30 beta

We would love for you to try all the latest features in upstream Kubernetes through our beta. Get started by visiting http://documentation.ubuntu.com/canonical-kubernetes

Besides getting a taste of the features I outlined above, you’ll be able to try exciting changes that will soon be included in the upcoming upstream GA release on 17 April 2024. Among others, CEL for admission controls will become stable, and the drop-in directory for Kubelet configuration files will go to the beta stage. Additionally, Contextual logging and CRDValidationRatcheting will graduate to beta and be enabled by default. There are also new metrics, such as image_pull_duration_seconds, which can tell you how much time the node spent waiting for the image.

We want Canonical Kubernetes to be a great K8s for everyone, from developers to large-scale cluster administrators.

Try it out and let us know what you think. We would love your feedback! You can find contact information on our community page.

We’ll also be available at KubeCon in Paris, at booth E25 – if you are there, come and say hi.

Meet Canonical at KubeCon + CloudNativeCon

Join Canonical, the publishers of Ubuntu, as we proudly return as a gold sponsor at KubeCon + CloudNativeCon EU 2024. Hosted by the Cloud Native Computing Foundation, the conference unites adopters and technologists from top open source and cloud-native communities. Mark your calendars for March 20-22, 2024, as we gather in Paris for this exciting event.

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. 

Engaging with cloud-native enthusiasts and open source communities is a cornerstone of our mission. We’re excited to connect with attendees at KubeCon EU to share insights, foster collaboration, and contribute to this vibrant ecosystem.

BOOK A MEETING WITH US

Future-proof your infrastructure with the Canonical team at KubeCon EU

Build applications with ultra-small and secure containerisation that works on any infrastructure

Ubuntu containers are designed for modern software deployment. Our container portfolio ranges from an ecosystem of base OCI images and ultra-optimised chiselled container images to our long-term supported Docker images.

When building applications, developers can rely on Ubuntu’s seamless containerisation experience from development to production, while getting timely updates, security patches and long-term support with a consistent, predictable lifecycle and support commitment.

Chiselled Ubuntu is where ultra-small meets ultra-secure. Developers can keep building with Ubuntu and rely on Chisel to extract an ultra-small, bitwise identical image tailored for production. No more worries about library incompatibilities: just seamless development to deployment.
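
To give a rough idea of how Chisel is used, it cuts only the package slices you name into a target root, which can then become the content of a minimal production image. The release and slice names below are illustrative and will differ for your application:

# Assumed example: cut the libc and CA certificate slices into ./rootfs
chisel cut --release ubuntu-22.04 --root ./rootfs libc6_libs ca-certificates_data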

Deploy and orchestrate your containers with the same Kubernetes: from your laptop through the cloud to the data centre

At Canonical, our aim is to streamline Kubernetes cluster management by removing unnecessary manual tasks. Be it the developer workstation, the data centre, the cloud or an IoT device, deploying applications on Kubernetes should not be a different experience just because the infrastructure changes.

MicroK8s is a lightweight Kubernetes distribution that enables you to run enterprise-grade Kubernetes on your laptop, Raspberry Pi, or in any public cloud while consuming minimal resources. MicroK8s applies security updates automatically by default, and rolls them back on failure. 

That’s not all. We understand how maintaining Kubernetes upgrades can take a toll on development efficiency. With MicroK8s, you can upgrade to a newer version of Kubernetes with a single command.
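
For example, because MicroK8s is delivered as a snap, moving an existing installation to a newer Kubernetes release is a single refresh against the desired channel (the channel shown here is illustrative):

sudo snap refresh microk8s --channel=1.30/stable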

Get your infrastructure ready for GenAI workloads

The Linux Foundation recently published a report confirming that almost half of organisations prefer open source solutions for GenAI initiatives. Open source enables organisations to iterate faster and accelerates project delivery by taking away the burden of licensing and tool accessibility. Yet GenAI comes with several challenges, such as the need for extensive compute resources and the associated costs. To optimise the use of their compute resources, organisations need efficient and scalable AI infrastructure, from bare metal to Kubernetes to their MLOps platforms. Our Kubeflow distribution, Charmed Kubeflow, is designed to run on any infrastructure, enabling you to take your models to production in the environment that best suits your needs.

Canonical also works with leading silicon vendors like NVIDIA to optimise its open source solutions for AI infrastructure and enable efficient resource utilisation. This is especially relevant for large-scale deployments, where a large number of GPUs live under the same cluster. 

Increasing GPU utilisation on K8s clusters for AI/ML workloads

Join Maciej Mazur’s keynote at KubeCon EU on 22 March to see how all layers of the stack can be optimised for AI/ML workloads. His talk will focus on increasing GPU utilisation through GPU sharing in the open source world, covering pitfalls, best practices, and recommendations based on four projects of similar scale.

From hardware-level capabilities such as NVIDIA Multi-Instance GPU (MIG) to Kubernetes schedulers such as Volcano, Maciej will go through the different opportunities organisations have to optimise their infrastructure for AI workloads and scale their projects. MLOps platforms like Charmed Kubeflow go a level further and enable application-layer optimisation. For instance, Charmed Kubeflow provides access to frameworks like PaddlePaddle, which distributes training jobs in a smarter way.

Deliver innovation at scale with reliable security patching and support

Whether you’re building new products or AI models, it’s crucial to ensure that the pace of innovation is not hindered by security vulnerabilities. That’s why Canonical’s open source solutions come with reliable security maintenance, so you can consume the open source you need at speed, securely.  

Meet our team to learn more about Ubuntu Pro, our comprehensive subscription for open source software security. With Ubuntu Pro, organisations reduce their average CVE exposure from 98 days to 1 day. It enables development teams to focus on building and running innovative applications with complete peace of mind.
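
As a quick illustration, attaching a machine to Ubuntu Pro and enabling expanded security coverage is done with the pro client; the token placeholder below comes from your Ubuntu Pro account:

sudo pro attach <your-pro-token>
sudo pro enable esm-apps   # expanded security maintenance for universe packages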

Join us at Booth E25

If you are attending KubeCon EU in Paris from 20 to 22 March, make sure to visit booth E25. Our team of open source experts will be available throughout the day to answer all your questions.

You can already book a meeting with our team member Teresa Lugnan using the link below.

BOOK MEETING

Telco-grade Sylva-compliant Canonical platforms

29 February 2024 at 07:00

In December 2023, Canonical joined the Sylva project of Linux Foundation Europe to provide fully open-source and upstream telco platform solutions to the project. Sylva aims to tackle the fragmentation in telco cloud technologies and the vendor lock-in caused by proprietary platform solutions, by defining a common validation software framework for telco core and edge clouds. This framework captures the latest set of technical requirements from operators when running telco software workloads as cloud native functions (CNF), such as 5G core microservices and Open RAN software.

Sylva’s mission is to support 5G actors in their efforts to drive convergence of cloud technologies in the telco industry – taking into account interoperability across 5G components, TCO with open source software, compliance with regulations and adherence to high security standards. CNFs from vendor companies can then be operated and validated on reference implementations of the cloud software framework defined by Sylva. 

To test and validate telco vendor CNFs, Sylva has deployed cloud-native platforms based on a multi-deployment model as Kubernetes (K8s) clusters on bare metal or OpenStack. These CNFs often require telco-grade enhanced platform features like SR-IOV, DPDK, NUMA, and Hugepages, along with support for a range of container networking interfaces (CNI). In this blog, we explain how Canonical’s Sylva-compliant infrastructure solutions satisfy these requirements.

Canonical’s open source platform solutions for Sylva

Canonical’s product portfolio is closely aligned with Sylva’s objectives and strategies. It provides a variety of features that Sylva aims to include in the latest modern telecom infrastructure deployments. The project has already deployed validation platforms running on Ubuntu, and also leverages hardened Ubuntu 22.04 images.

Canonical Kubernetes is a CNCF conformant enterprise-grade Kubernetes with high-availability. It delivers the latest pure upstream Kubernetes, which has been fully tested across a variety of cloud platforms of all form factors, including provisioned bare metal systems, Equinix Metal and OpenStack, and architectures including x86, ARM, IBM POWER and IBM Z. It supports the Cluster API (CAPI), which is mandated by Sylva to provision Kubernetes. With CAPI, an operator can update Kubernetes clusters through rolling upgrades without disruption and initialise their workloads. 

For telco edge clouds, Canonical Kubernetes can scale as a lightweight Kubernetes solution with self-healing, high-availability and easy clustering properties. This provides a minimal footprint for more energy-efficient operations at edge clouds. It can equivalently scale up at regional and central clouds where a larger footprint is needed in a data centre. 

Based on Canonical Kubernetes, Canonical’s Cloud Native Execution Platform (CNEP) aligns with the Sylva platform features and architectural design. With CNEP, Kubernetes clusters are offered to telco operators on bare metal hardware, where hardware provisioning and cluster operations can both be controlled and orchestrated via Cluster API centrally. 

CNEP’s set of supported features makes it ideal for operators who want to adopt a Sylva compliant platform with validated telco CNFs from vendors, e.g. 5G core and Open RAN as well as MEC CNFs, such as content delivery networking (CDN) software. The platform software stack fully supports the Sylva design from bare metal to containers, with capabilities including:

  • Bare metal provisioning operations automated via Cluster API
  • Enhanced platform awareness features, such as SR-IOV, DPDK, CPU pinning, Hugepages and NUMA
  • Ubuntu operating system with CIS security hardening, compliant with FIPS, NIST 800-53, PCI DSS, DISA STIG and ISO 27001 standards
  • A real-time kernel for mission-critical applications and latency-sensitive telco workloads, such as Open RAN DU and 5G UPF
  • Fully upstream and CNCF-compliant Canonical Kubernetes that provides operators with an industry-standard and production-grade Kubernetes container orchestration platform with multi-tenancy features, exposing Cluster API
  • A wide range of CNIs, required by vendor CNFs and the Sylva validation framework, such as Cilium, Calico, Multus, and others
  • Ceph as a backbone for distributed multi-tenant storage with configurable data protection and encryption
  • Full observability, with support for the Canonical Observability Stack, consisting of popular open source software tools Grafana, Prometheus, and Loki, supporting logging, monitoring and alerting
  • Role based access control (RBAC) features at platform, Kubernetes and bare metal provisioning levels

In addition to Canonical Kubernetes and our CNEP solution, Canonical OpenStack supports the advanced platform features that Sylva validation platforms need, including SR-IOV, DPDK, CPU-pinning, NUMA, Hugepages, PCI passthrough, and NVIDIA GPUs with virtualisation. It has native support for both Ceph and Cinder as storage components, both of which are included in the Sylva platform design and roadmap.

About the Sylva project 

Aligned with telco operator needs, Sylva envisions cloud-native telco software execution on Kubernetes platforms. Operators look to deploy Kubernetes clusters at their telco edge, regional and core clouds, providing them with a uniform cloud-native execution environment.

Modern telco infrastructure is distributed, deployed across multiple locations with tens of thousands of far-edge clouds, thousands of near-edge clouds and tens of regional clouds. This calls for deploying and managing a large number of Kubernetes workload clusters at geographically dispersed locations, controlled by management cluster(s) located at regional and central clouds. To tackle this challenge, Sylva has defined a software framework for telecom software platforms based on Kubernetes that are deployed on a large scale. 

Modern telco clouds must also support a set of enhanced platform features often required by telco CNFs. Towards this, the project’s validation platforms verify that (i) the deployment platform supports the requirements of a CNF in test, and (ii) the CNF can correctly deploy on the platform and successfully consume these platform features.

Kubernetes cluster management

Sylva follows a declarative approach with a GitOps framework to manage a high volume of physical nodes and Kubernetes clusters. Infrastructure lifecycle management covers Day 0 (build and deploy), Day 1 (run), and Day 2 (operate) operations, with fault management, updates and upgrades. The project provides automation with CI/CD pipelines where a set of scripts produce and maintain Helm charts that include Kubernetes deployment and operational resource definitions.

A dedicated work group, called Telco Cloud Stack, has developed tooling for cluster deployment and lifecycle management (LCM). This tooling is based on the Flux GitOps tool, which keeps clusters and infrastructure components in sync with their definitions in Git repositories. 

To manage the Kubernetes clusters and bare metal provisioning with this tool-chain, Sylva leverages Cluster API (CAPI).
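
To give a flavour of the Cluster API workflow that Sylva builds on, a management cluster is initialised with an infrastructure provider and then used to generate and apply workload cluster definitions. This is a minimal sketch; the provider and version values are assumptions, and Sylva’s own GitOps pipelines wrap these steps rather than running them by hand:

clusterctl init --infrastructure metal3            # assumed bare metal provider
clusterctl generate cluster my-cluster --kubernetes-version v1.29.0 > my-cluster.yaml
kubectl apply -f my-cluster.yaml                   # the management cluster reconciles the nodes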

Validation of telco CNFs on Sylva platforms

CNFs from different vendors are validated on Sylva platforms for the interoperability between the CNFs and the platforms. The project’s validation program ensures that telco operators who deploy platforms with software components that follow the Sylva reference implementations gain two benefits: (i) verified telco CNF functionality on their cloud platforms, and (ii) verified support for the telco-grade platform features which these CNFs require.

The project has a dedicated work group called the Sylva Validation Center, which tests deployment of vendor CNFs on the project’s validation platforms, where Kubernetes runs on either bare metal hardware or on OpenStack. 

The validation of a CNF under test on a Sylva platform starts with identifying the set of platform capabilities that the CNF requires, including CNIs, and then installing and configuring the platform with those capabilities. Once the platform has been configured, a first set of smoke tests is run to verify the platform’s support for these features. Once the CNF has been deployed on the platform, functional tests are performed to verify that the deployment is correct and that all the necessary Kubernetes pods are healthy and in the Ready state. Finally, operators may run additional tests on CNFs if deemed necessary.

Canonical’s open source software and solutions meet the platform feature requirements of telco CNFs tested by the Sylva Validation Center, such as SR-IOV, Multus CNI, and Real-time Linux. Validating telco CNFs on Canonical’s platforms for Sylva will also ensure that our platforms, with support for these advanced features, are verified by Sylva to run these CNFs.

Sylva platform roadmap

In its roadmap for 2024, project Sylva is planning to add support for new features in its validation platforms, such as near real-time Linux, immutable operating system for far-edge clouds and GPU offloads. Canonical’s software platforms follow Sylva’s vision and have support for these features already today, with Real-time Ubuntu, Ubuntu Core immutable OS, support for precision time protocol (PTP) and more.

Canonical is committed to making Sylva a benchmark platform for executing telco network functions. This commitment entails contributing to the infrastructure-as-code scripts that compose Sylva, enabling our open source solutions for Sylva, and aligning with the evolving technical scope of the project.

Summary

Linux Foundation Europe’s Sylva project has defined a platform architecture for validating cloud-native telco network functions on Kubernetes. This provides telco operators with guidance on how to achieve a uniform cloud infrastructure, covering edge, regional and central cloud locations, ultimately aiming at multiple objectives, including cost reduction, interoperability, automation, compliance and security.

The project emphasises the central role of open source platforms with standard and open APIs, which brings a modular approach when designing and deploying telco cloud systems. 

Canonical offers fully upstream and telco-grade open source solutions that align with the Sylva platform architecture, including Canonical Kubernetes and Canonical OpenStack. We have also engineered an innovative platform solution, CNEP, which is fully in line with Sylva’s vision of multi-tenant, multi-site Kubernetes clusters on bare metal, with full automation of hardware provisioning and cluster lifecycle management performed over the industry-standard Cluster API.

Contact us

Canonical provides a full stack for your telecom infrastructure. To learn more about our telco solutions, visit our webpage at ubuntu.com/telco.

Further reading

Canonical joins the Sylva project

Bringing automation to telco edge clouds at scale

Canonical Kubernetes 1.29 is now generally available

Fast and reliable telco edge clouds with Intel FlexRAN and Real-time Ubuntu for 5G URLLC scenarios

Meet Canonical at Mobile World Congress Barcelona 2024

26 January 2024 at 09:01

The world’s largest and most influential telecommunications exhibition event, Mobile World Congress (MWC), is taking place in Barcelona on 26-29 February 2024. Canonical is excited to join this important annual event once again and meet the telecom industry. 

Telecommunications is a key sector for Canonical. We offer solutions for private, public and hybrid/multi cloud environments, with a consistent experience across the entire telecom spectrum, from core clouds to the edge, with a single set of tooling. Built with the same philosophy as Ubuntu – secure, trusted and production-grade open source backed by full operations support – our solutions are fully upstream and integrate the latest technology advancements that telco leaders require to deliver best-in-class services to their customers. 

We are looking forward to meeting you at MWC 2024. Come and speak with our experts to learn how we can help you in your journey to cost-effective, secure and trusted open source telecom solutions for your infrastructure.

Hot topics in telco

To meet today’s customer expectations, telecom operators require flexible, scalable and agile operations across the many service types that make up a modern mobile network.

At this year’s MWC event in Barcelona, Canonical’s team will explain how you can elevate your telecom infrastructure with the latest innovations in cloud-native technologies and modernise your telco clouds with open source. These strategies will empower you to meet and exceed customer expectations with repeatable and reliable deployments.

Automation at scale for telco edge clouds with CNEP

We have been listening to our telco customers to understand what they need to deliver cost-effective, modern edge clouds they can rely on. Canonical is proud to offer a new holistic solution, Cloud Native Execution Platform (CNEP), to meet these needs precisely at telco edge clouds.

With CNEP, we deliver the ideal software stack for telco edge clouds with automation in place, based on fully upstream and CNCF certified Kubernetes running on bare metal hardware for best performance. It brings all essential open source components together, with the aim of achieving high performance in data processing and delivery, whilst ensuring platform security and efficiency with Ubuntu Pro.

At MWC, our team will explain how operators can achieve scalable and repeatable deployment of edge clouds with CNEP. For Open Radio Access Network (RAN) readiness, CNEP is the ideal RAN platform, bringing all the technology features that cloud-native Open RAN components require. CNEP is also tailored for best performance and security assurance for distributed compute and multi-access edge computing (MEC) applications, enabling businesses to run their telco workloads on 5G edge networks.

Real-time Ubuntu for ultra-reliable and low-latency communications

Canonical has been working with all major silicon hardware vendors, such as Intel, to deliver the highest levels of performance and security to telco networks and applications. 

We have been running advanced engineering programs with the aim of enabling the latest innovations in silicon hardware in telco software infrastructure at a rapid pace, with quick software release cycles. As part of our collaboration with Intel, we have integrated Intel FlexRAN into the Ubuntu real-time kernel for telco applications and networking software, enabling real-time processing at both the operating system and silicon levels.

At this year’s MWC, we will explain how Ubuntu Pro brings real-time data processing capabilities to the telco edge for mission-critical operations and also ensures confidential computing for the most-sensitive telco workloads.
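
As an example of how this is consumed, on an Ubuntu Pro-attached machine the real-time kernel is enabled through the pro client (a reboot is required afterwards):

sudo pro enable realtime-kernel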

Sustainable telco edge infrastructure with an energy-efficient system stack

Telecom networks will increasingly deploy edge cloud sites in the journey to distributed and flexible cloud-native operations. This requires support for several features across the hardware and software stack to make sure that platforms are energy and cost efficient. From container images to bare metal hardware automation, Canonical’s edge cloud stack is equipped with features that ensure sustainable operations.

In Barcelona, we will explain how our open source software stack can deliver optimal deployments on telco edge clouds and help operators meet their sustainability goals.

Demos

At MWC 2024, you will get the chance to see our technical team demonstrate Canonical’s CNEP solution. This is a great opportunity for all players in the telco ecosystem to see how we meet sector requirements on cloud-native operations at the telco edge with automation. In our demo, the Canonical team will run CNEP on Intel’s 4th Generation Xeon Scalable Processor, bringing the acceleration capabilities provided by Xeon to large-scale edge network rollout for cost-efficient Open RAN deployments.

CNEP’s open and upstream APIs, along with Canonical’s observability stack and telemetry solutions, enable machine learning algorithms to assist edge cloud operations. The Canonical team will demonstrate how our AI/ML platform solutions can be used to boost the effectiveness of distributed computing applications running on telco edge clouds. We will show how a multi-cloud data platform can be formed for the various data types collected from a telecom network. We will also show ML-based anomaly detection and an LLM that summarises and explains data collected from the network.

Come and meet us at MWC 2024

If you are interested in building your own modern telecom infrastructure and migrating to open source with cost-effective, secure and trusted solutions, Canonical can help you. We provide a full stack for your telecom infrastructure, enabling secure, trusted, flexible, optimised, automated and low-touch operations.

To learn more about our telco solutions, meet us to discuss your telecom needs at MWC Barcelona 2024, and visit our webpage at ubuntu.com/telco.

If you’re unable to find a suitable time, please reach out to Mariam Tawakol <mariam.tawakol@canonical.com> or Jacob Boe <jacob.boe@canonical.com>. Let them know your availability and what you’re interested in, and they will set up a meeting for you.

Further reading

Canonical joins Open Networking Foundation

Fast and reliable telco edge clouds with Intel FlexRAN and Real-time Ubuntu

Bringing automation to telco edge clouds at scale

How telcos are building carrier-grade infrastructure using open source
