
Meet our federal team at Space Tech Expo USA

We’re excited to announce our participation in Space Tech Expo 2024, which will take place from May 14 to 15 in Long Beach, California. As our collaboration with Space agencies strengthens, we’re looking forward to meeting our partners and customers on-site to discuss the critical topics for 2024: cybersecurity, artificial intelligence, and open-source innovation.

AI/ML Solutions in Space 

Space organizations invest heavily in AI, aiming to make interstellar travel successful. Agencies are kickstarting initiatives with different use cases in mind, such as mission planning, autonomous decision-making, resource optimization, and space debris control, and are looking for tooling that enables them to run AI at scale.

Unlocking real-time space tracking with AI/ML supercomputing

Just recently we worked with the University of Tasmania and Firmus to unlock real-time space tracking with AI/ML supercomputing. 

The University of Tasmania (UTAS) is among the leading providers of space tracking in the southern hemisphere.  The number of new satellites entering the Earth’s orbit is increasing at a near-exponential rate, exceeding 10% growth per year as of 2022, so the university’s work is critical. However, traditional CPU-based data processing had led to a bottleneck that prevented comprehensive, real-time object monitoring. 

What UTAS needed was a modernised infrastructure that could support its immense data processing requirements without drastically inflating costs. The university’s solution was to migrate its space tracking software to Supercloud – a cost-effective and highly sustainable supercomputing platform from Firmus, built using Canonical OpenStack and Canonical Kubernetes, and capable of supporting the most data-intensive AI/ML workloads. 

With Firmus supercomputing based on Canonical infrastructure, UTAS has successfully solved its data processing bottleneck. It has also positioned itself to be able to take advantage of additional MLOps tooling, such as Charmed Kubeflow, that can run on top of Canonical Kubernetes to further simplify AI/ML workflows.

“The UTAS project will help fill the global shortfall in space tracking coverage, and will improve the safety of orbital infrastructure and secure the future of space missions. We are pleased to play our part by contributing Firmus compute resources to support much needed real-time space tracking”. — Peter Blain, Director of Product & AI, Firmus

Explore the case study here. 

To provide the most complete AI solutions to Space agencies, we’ve partnered with the leading hardware, silicon and cloud providers, such as NVIDIA, DELL, AWS, Google Cloud, HPE, Intel, Azure and more. 

Sending the artwork to the International Space Station 

In 2022, we teamed up with Lonestar to send artwork to the International Space Station (ISS). The artwork is part of a global STEAM initiative and a groundbreaking immutable data storage and edge processing demonstration currently running aboard the ISS, led by stealth start-up Lonestar, open-source leader Canonical, and leading space mission integrator Redwire Corporation, a leader in space infrastructure for the next-generation space economy.

Read the full story here. 

Cybersecurity with Ubuntu Pro

Open source technology has been used in space systems for years, and it is no surprise that cybersecurity is a key concern for the industry. As part of our commitment to securing open source, last year we announced the general availability of the Ubuntu Pro subscription.

Ubuntu Pro secures an organisation’s Linux estate from the OS to the application level. It is available on-prem, in the cloud, and in air-gapped environments, automating security patching, auditing, access management and compliance. Ubuntu Pro delivers FIPS compliance and automation for security standards such as DISA’s Ubuntu STIG, as well as CIS hardening via the Ubuntu Security Guide (USG).

One of the growing concerns for 2024 is application security. Many open-source packages for applications and toolchains exist in a space with no guarantee or SLA for security patching. With Ubuntu Pro, we secure over 23,000 open source applications.

If the topic sounds interesting to you, schedule a meeting with our Federal Director, Kelley Riggs, for an in-person discussion at Space Tech Expo.

From sales development to product: Adrian’s career progression at Canonical

Career progression is crucial to employee engagement and retention. Canonical encourages employees to shape their own personal development journeys, giving them the chance to apply internally for roles they are passionate about, even if they come from different backgrounds. We have many great examples. Today we will be featuring the story of Adrian Matei.

Adrian is a Product Manager here at Canonical, leading the strategy of our Managed Solutions offering. His team ensures that customers receive managed services that are in tune with market expectations and effectively support business goals. We asked Adrian more about his story with Canonical and how his career progressed. 

Adrian’s career journey with Canonical

When did you start with Canonical and what was your original role?

I joined Canonical in the summer of 2022, as a Sales Development Representative for the French markets. It was my first venture into sales development, which I wanted to undertake because of my natural extraversion and my love for the French language. 

What did you switch to and when?

I switched to Product Management in July 2023, so just over a year after joining. 

What was the reason behind that?

I’ve got a background in business management for the tech sector, and so I’m naturally curious about ways in which IT products can be improved to become more competitive. I had been in touch with my predecessor – whom I salute in case he’s reading this – and I found his role fascinating: managing not a standalone product, but a complex commercial and technical set of protocols that bring Canonical’s entire portfolio to customers. 

Because of my background and interests, I had developed fruitful relationships with several other Product Managers, who encouraged me to apply. Apply I did, excitedly.  With a bit of luck and a lot of work, I got the role.  And I could not be happier, because while I did enjoy sales, product management comes more naturally to me, and gives me a significantly more profound sense of impact and satisfaction. 

What was the process and how long did it take? 

The process was thorough due to the high specialization required by product management within our company, but this allowed me to get to know the team better. Product Management is at the core of Canonical’s identity. Our team of Product Managers brings together a set of very talented individuals, and it’s a demanding discipline which requires thought-leadership, an analytical mindset, solid stakeholder management and a strategic outlook. Therefore, in order to verify my technical and commercial eligibility for this team, multiple people had to interview me. 

I had to prepare presentations, conduct market research, and ultimately form an initial hypothesis about where the product should go with my guidance. Understandably, it took some time to transition into the team.

Do you advise readers to consider a career at Canonical? 

Absolutely! Canonical to me is a lovely workplace. It’s filled with very passionate people, and there’s always something new to learn and do. In terms of benefits and work/life balance, it’s perfect for me, because I enjoy both the flexibility of a remote company and the many opportunities to travel (I really love our sprints all over the world). It’s also very stimulating and at times intense. As someone motivated by impact, Canonical gives me the feeling that I’m doing something important. I think the opportunities are endless here. 

Join the team 

Career progression is an area that Canonical pays a lot of attention to, and Adrian’s story is evidence of that. In the next article of this series, we’ll explore another story. In the meantime, browse current openings to find the most interesting role for you and apply. Remember, it doesn’t have to be your last role at Canonical 🙂

Canonical releases Landscape 24.04 LTS

With 12 years of support, Landscape 24.04 LTS adds snap and repository management controls to a modernised systems management web portal and API.

Screenshot of the new Landscape Dashboard

London, 30 April 2024.

Today Canonical announced the availability of Landscape’s first LTS release. Landscape 24.04 LTS features a new versioned API, a new web portal with accessibility and performance in mind, and intuitive controls for software distribution. Landscape 24.04 LTS comprises Landscape Server and Landscape Client. With a modernised backend and web portal in place, engineering teams can work efficiently, focusing on patches and new features.

Predictable release cadence and 12 years of support for LTS versions

Building on Canonical’s commitment to reliability, Landscape releases going forward will align with Ubuntu LTS and interim releases for predictable security coverage, feature patches, and bug fixes.

Landscape Server 24.04 can be installed on Ubuntu 22.04 LTS and Ubuntu 24.04 LTS releases with Ubuntu Pro. Landscape Server 24.04 is compatible with the previous four Ubuntu LTS releases (Ubuntu 16.04 LTS onwards), and will manage future Ubuntu releases including Ubuntu 26.04 LTS.

Like Ubuntu 24.04 LTS, this Landscape release gets a 12-year commitment for security maintenance and support. Landscape 24.04 LTS will get five years of bug fixes and incremental feature patches until August 2029. With the Legacy Support add-on, Ubuntu Pro subscribers can continue using Landscape 24.04 LTS after these five years, for a total of 12 years.

A new web portal built with Canonical’s Vanilla Framework

Vanilla Framework provides consistent and uniform design patterns across Canonical’s products. Landscape joins MAAS, LXD UI, and others with a responsive React JS driven user interface. This web portal is built using a new versioned API serving JSON data. This API enhancement ensures seamless integration for developers, offering a forward-looking assurance that applications developed with a particular API version will remain robust and reliable, regardless of future updates to Landscape and its accompanying API endpoints.
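The value of a versioned API is that clients pin a version in the request path, so their integrations keep working across server upgrades. The host, endpoint path, and payload below are hypothetical, for illustration only; consult the Landscape API documentation for the real routes:

```python
import json
from urllib.parse import urljoin

API_ROOT = "https://landscape.example.com/api/"  # hypothetical host
API_VERSION = "v1"  # pinning the version keeps responses stable across upgrades


def versioned_url(resource: str) -> str:
    """Build a request URL that pins the API version in the path."""
    return urljoin(API_ROOT, f"{API_VERSION}/{resource}")


# A hypothetical JSON response for a computer listing, as the portal might consume it.
sample_response = '{"results": [{"id": 1, "hostname": "web-01"}], "count": 1}'
payload = json.loads(sample_response)

print(versioned_url("computers"))  # https://landscape.example.com/api/v1/computers
print(payload["count"])            # 1
```

Because the version lives in the URL rather than in negotiated headers, a script written against `v1` continues to receive `v1`-shaped JSON even after the server starts offering newer API versions.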

The Monitoring feature from the legacy Landscape web portal has not yet been migrated to Landscape 24.04 LTS. Monitoring will arrive as an incremental patch for Landscape 24.04 LTS with a modern charting library, a monitoring API, and companion documentation.

Lastly, the web portal’s Lighthouse accessibility score improved significantly, from 70% to 95%. Landscape 24.04 LTS has a web portal that is accessible to users with colour vision deficiencies, complete colour blindness, and other visual impairments.

Save terabytes in storage and bandwidth with point-in-time repository snapshots

An overview of the repository management experience in the new Landscape web portal.

Landscape’s new web portal includes an intuitive point-and-click repository mirroring experience, and the repository snapshot service is available as a source when mirroring repositories. In late 2023, Canonical became the first Linux provider to integrate a repository snapshot service with Microsoft Azure’s update mechanisms. Landscape 24.04 LTS brings this simplified and safe deployment practice capability on-premises, and to mixed and hybrid cloud environments.

Benefits of Landscape’s repository snapshot service include predictable updates, consistency across deployments, and simplified repository mirroring, providing improved resilience and security for Ubuntu workloads.

Beyond the conveniences afforded to system administrators, the repository snapshot implementation also saves over 100 terabytes of disk space and network throughput for organisations making complete repository mirrors every week. Canonical’s on-demand repository snapshot capability extends back to February 2023 for non-ESM (Expanded Security Maintenance) repositories. This innovation frees storage and network resources, because scheduled mirroring and archival of these mirrors becomes unnecessary.
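A rough back-of-the-envelope calculation shows where savings of this magnitude come from. The mirror size and cadence below are illustrative assumptions for the sketch, not Canonical’s published figures:

```python
# Illustrative arithmetic: weekly full repository mirrors vs. on-demand snapshots.
mirror_size_tb = 2        # assumed size of one complete repository mirror, in TB
mirrors_per_year = 52     # one full mirror taken every week

# Storage and transfer consumed per year by archiving every weekly mirror,
# all of which becomes unnecessary with point-in-time snapshots.
storage_per_year_tb = mirror_size_tb * mirrors_per_year
print(storage_per_year_tb)  # 104 (TB per year, under these assumptions)
```

Even with a conservative mirror size, a weekly full-mirror habit crosses the 100 TB mark within a year, which is why replacing scheduled mirrors with snapshots pays off so quickly.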

Snap management for Ubuntu and Ubuntu Core

Beyond managing Ubuntu interim and LTS releases, Landscape 24.04 LTS also manages Ubuntu Core, Canonical’s snap-based, immutable and strictly confined operating system. For anyone interested in consuming the latest Landscape Client as a snap package, a strictly confined Landscape Client snap provides snap package management, remote script execution, monitoring, and inventory capabilities on Ubuntu.

Snap management capabilities also exist in the Landscape Client Debian package, available in the Main repository for Ubuntu 24.04 LTS, and in ppa:landscape/self-hosted-24.04 for previous versions of Ubuntu.

Distribution of updated snap revisions is controlled through the Snap Store, which organisations can self-host as a snap store proxy, or as a brand store if there is a need to distribute proprietary non-public snaps within the organisation. Snap management in Landscape 24.04 LTS can add, remove, update, and pause updates from Snap Store, snap store proxy, and brand stores.

Landscape has historically provided fine-grained management of Debian packages installed through the apt package manager. With Landscape 24.04 LTS, similar management capabilities arrive for snap packages, with consideration for revisions and channels, which are specific to the snap ecosystem. By default, snap packages self-update through transactional over-the-air updates, and can roll back automatically if an upgrade fails. Organisations and individuals interested in uniformity across machines can pin snap revisions to ensure consistency between machines that must be uniformly configured.


About Canonical

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.

Learn more at canonical.com.

The biggest use cases for AI in Automotive (that aren’t just self-driving cars)

A study of 4 major use cases of AI in cars

In this fast-paced age of technological evolution, Artificial Intelligence (AI) emerges as the key catalyst driving profound shifts in the automotive sector. From smart vehicle design to customised in-car interactions, AI is reshaping every aspect of transportation, ensuring safer, more effective, and environmentally friendly journeys for both drivers and passengers.

In this blog, we’ll have a look at the four most promising use cases for AI in the automotive industry.

Intelligent vehicle lifecycle management

Innovative vehicle design, material use, and manufacturing processes

AI-powered generative design algorithms are transforming how vehicles are conceptualised and engineered, pushing the boundaries of creativity and efficiency. These algorithms optimise vehicle structures for performance, safety, and sustainability by analysing vast datasets and exploring numerous design iterations. 

Moreover, AI is revolutionising material selection: manufacturers are harnessing its capabilities to identify the most suitable materials for each component, balancing strength, weight, and environmental impact. This results in vehicles that are lighter, more fuel efficient, more technologically advanced, and more sustainable to produce — contributing to a greener industry and future.

Predictive maintenance and diagnostics

AI is reshaping the landscape of vehicle maintenance through predictive maintenance systems that redefine how issues are identified and addressed.

Some cars have over 100 embedded sensors, tracking everything from engine fuel-oxygen mixes and tyre pressure, to component temperatures and orientation. AI algorithms can use the data from these sensors to predict mechanical and electrical faults before they happen, opening up the door for proactive, preventative maintenance.

As a result, vehicle downtime is minimised, maintenance costs are reduced, and overall reliability is significantly enhanced, ensuring a smoother ownership experience for drivers.
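The core idea behind sensor-based fault prediction can be sketched in a few lines: compare the latest reading against a rolling baseline and flag readings that drift too far from it. A production system would use trained models over hundreds of sensors; this toy threshold check and its data are illustrative only:

```python
from statistics import mean, stdev


def is_anomalous(history: list[float], latest: float, k: float = 3.0) -> bool:
    """Flag a reading more than k standard deviations from the recent baseline."""
    baseline, spread = mean(history), stdev(history)
    return abs(latest - baseline) > k * spread


# Simulated engine-temperature readings (degrees Celsius)
normal_temps = [90.1, 89.8, 90.4, 90.0, 89.9, 90.2, 90.1, 89.7]

print(is_anomalous(normal_temps, 90.3))   # False: within normal variation
print(is_anomalous(normal_temps, 104.5))  # True: schedule preventative maintenance
```

The same pattern, scaled up with learned baselines per component and per driving condition, is what lets a predictive maintenance system raise a service alert before a part actually fails.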

Supply chain enhancements

AI isn’t just making cars lighter and more efficient – it’s also making them easier to build and send to showrooms and car lots. Car manufacturers can use AI algorithms to analyse large amounts of data related to demand forecasting, inventory management, and logistics operations; this data will reveal ways to streamline supply chain processes and improve overall manufacturing efficiency. 

AI-driven supply chain enhancements enable OEMs (Original Equipment Manufacturers) to anticipate demand fluctuations, optimise inventory levels, and minimise lead times, thereby reducing costs and improving responsiveness to market dynamics. Moreover, AI enables predictive analytics for proactive risk management, allowing manufacturers to identify potential disruptions and mitigate them before they impact production. This helps car companies be more flexible, resilient, and competitive in today’s changing market. 

One example of this in action is the dispatch of parts across a vast network of locations, including repair shops and warehouses. AI algorithms analyse a multitude of factors, including weather data, customer repair habits, seasonal trends, and inventory levels, to predict demand and optimise part shipments. By consolidating information from various sources and through predictive analytics, AI enables automotive companies to proactively manage their supply chains, ensuring timely delivery of parts while minimising costs and maximising efficiency. 

This approach mirrors strategies employed by agricultural companies, which rely on AI to optimise the distribution of repair parts for harvesting machines, enhancing overall supply chain resilience and performance.

Enhanced in-car experience and connectivity

In the automotive field, it’s not just the vehicle that’s being improved by AI, but the human experience of that vehicle. AI is revolutionising the in-car experience, offering a seamless blend of comfort, convenience, and connectivity for drivers and passengers alike.

In-car experience personalisation

Gone are the days of one-size-fits-all vehicle settings. With AI, the in-car experience becomes highly personalised, adapting to the individual preferences and needs of each occupant. By analysing data on driver behaviour, environmental conditions, and historical usage patterns, AI algorithms adjust various settings within the vehicle to create a unique driver-specific experience. 

Imagine sitting in a brand-new car, or in your uncle’s car. Within seconds, the steering wheel height, mirrors, seat, and headrest adjust to put you at the perfect driving height with optimal vision of everything around you. The air conditioning turns on at a perfect 19 degrees (which your uncle thinks is a waste of fuel). The car radio imports your favourite stations as preset channels. The in-car GPS suggests preferred routes home for you based on your previous journeys and the current traffic. That’s the power of AI-driven user experience. 

AI ensures that every journey is as comfortable and enjoyable as possible. This level of personalisation not only enhances the overall driving experience but also fosters greater driver satisfaction and loyalty to automotive brands.

Natural Language Processing for smarter assistants

In today’s world, you’re more connected than ever. There’s just one problem: it’s illegal in most countries to use the thing that connects you (namely, your phone) while driving. This simple fact makes AI-powered natural language assistants a must-have companion. These assistants enable hands-free interaction with vehicle systems, allowing drivers to perform a wide range of tasks using voice commands alone. 

Whether it’s making phone calls, sending text messages, adjusting navigation settings, or controlling entertainment options, AI-powered natural language assistants make driving safer and more convenient. These assistants seamlessly integrate with other services and devices, such as calendars, emails, and smartphones, ensuring a connected and flawless experience for drivers. Imagine this: your AI companion remembers the 3pm text you got from your partner to pick up milk, and automatically adds a stop at the nearest convenience store that is listed as open and sells your usual purchased brand of organic 3.5% full-fat, free-range fresh milk. By harnessing the power of AI, natural language assistants transform the car into a true extension of the driver’s digital life, enhancing productivity and connectivity on the go.

Advanced mobility solutions and urban planning

AI goes even further than the car and its driver; at a macro scale, its data and feedback can improve roads, cities, and even the environment itself. As urbanisation continues to accelerate and cities confront growing challenges related to congestion, pollution, and limited infrastructure, AI emerges as a key enabler of advanced mobility solutions and urban planning strategies.

Multimodal AI Assistant and Cross-App Integration

The integration of AI-powered multimodal assistants marks a significant advancement in mobility solutions. These assistants are designed to seamlessly facilitate transitions between different modes of transport, offering users a harmonious and intuitive experience. Capable of processing various inputs such as voice commands, images, and video feeds, these assistants serve as versatile interfaces, connecting users with their vehicles and surrounding environments.

By analysing vast amounts of data, including traffic patterns, congestion hotspots, and user preferences, these assistants not only assist drivers but also contribute to the collective improvement of transportation systems. For instance, their recommendations for nearby points of interest (POIs) like attractions and services aren’t just about enhancing individual journeys. They are also about facilitating better traffic distribution, reducing congestion, and ultimately creating a more harmonious and enjoyable travel experience for everyone on the road.

Urban transport optimisation

In densely populated urban areas, efficient transport systems are essential for maintaining mobility and reducing environmental impact. AI plays a central role in optimising urban transport planning and infrastructure, using data analytics and predictive modelling to improve efficiency and sustainability. 

By analysing massive datasets, including traffic patterns, public transit schedules, and environmental conditions, AI algorithms identify opportunities for optimisation, such as route adjustments, traffic signal synchronisation, and modal shift incentives. Additionally, AI facilitates dynamic pricing and demand-responsive services, ensuring that transport networks remain responsive to changing needs and preferences. Through urban transport optimisation, AI enables cities to alleviate congestion, reduce emissions, and enhance overall mobility, creating more pleasant and sustainable urban environments.

Travel booking and mobility services

AI-driven travel booking, ride-hailing platforms and Mobility as a Service (MaaS) solutions offer individually curated and integrated transportation options, adjusting to individual preferences and needs. With the help of AI algorithms, these platforms analyse user data, historical travel patterns, and real-time availability to offer customised travel itineraries, including public transit, ride-sharing, and micromobility options. These plans extend beyond mode selection to include nuanced considerations such as off-peak travel calculations, surge pricing predictions, and custom suggestions for optimal travel experiences. For instance, AI could recommend travel options based on a user’s preference for a car with ample luggage space, in-car entertainment features, or the most direct route with the fewest stops.

Additionally, AI optimises travel routes and schedules, taking into account factors such as traffic conditions, weather forecasts, and user preferences, to ensure efficient and stress-free journeys.

By streamlining travel booking and offering tailored mobility solutions, AI enhances the overall urban mobility experience, making it easier and more convenient to navigate cities and reach destinations.

Simulation and testing for autonomous driving

The pursuit of autonomous driving (AD) stands at the forefront of automotive technology, promising safer, more efficient, and more convenient transportation solutions. Central to this endeavour is the use of AI to assist in rigorous simulation and testing processes, ensuring the reliability and safety of autonomous vehicles.

Complex AD simulation scenarios

The development and validation of Advanced Driver Assistance Systems (ADAS) and autonomous driving technologies require extensive testing under diverse and complex scenarios.

AI-driven simulation platforms play a crucial role in this process, generating realistic and dynamic environments that mimic real-world driving conditions. These simulations encompass a wide range of scenarios, including varying weather conditions, road layouts, traffic patterns, and unforeseen events, allowing developers to evaluate the performance of autonomous systems in virtually any situation. 

By relying on AI algorithms, these simulations continuously evolve and adapt, incorporating new data and insights to enhance their realism and effectiveness. As a result, developers can iteratively refine and optimise autonomous driving algorithms, accelerating the journey towards safe and reliable autonomous vehicles.
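A simulation sweep of this kind can be sketched as sampling combinations of scenario parameters, mixing an exhaustive grid for regression coverage with randomised draws for exploration. The parameter names and values below are illustrative:

```python
import random
from itertools import product

WEATHER = ["clear", "rain", "fog", "snow"]
TRAFFIC = ["light", "moderate", "dense"]
EVENTS = ["none", "pedestrian_crossing", "sudden_braking"]


def scenario_grid():
    """Exhaustive grid of scenario combinations for regression testing."""
    return [dict(weather=w, traffic=t, event=e)
            for w, t, e in product(WEATHER, TRAFFIC, EVENTS)]


def random_scenario(rng: random.Random):
    """Randomised scenario draw for fuzz-style exploration between grid runs."""
    return dict(weather=rng.choice(WEATHER),
                traffic=rng.choice(TRAFFIC),
                event=rng.choice(EVENTS))


print(len(scenario_grid()))  # 36 distinct scenarios from a 4 x 3 x 3 grid
```

Real simulation platforms layer continuous parameters (speeds, sensor noise, trajectories) and learned scenario generation on top of this kind of combinatorial skeleton.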

AI and AD Integration

At the core of AD systems lies the integration of AI algorithms, enabling vehicles to perceive, interpret, and respond to their surroundings in real time. AI processes data from various sensors, including cameras, LiDAR (Light Detection and Ranging), and radar, to identify objects, detect obstacles, and predict their movements.

Through advanced machine learning processes, AI algorithms continuously learn and improve, enhancing the accuracy and reliability of autonomous driving capabilities. Additionally, AI facilitates decision-making in complex and dynamic environments, enabling vehicles to navigate safely and efficiently in any driving conditions, no matter how stormy or congested they are. 

By integrating AI into autonomous driving systems, automotive manufacturers are creating even safer self-driving cars that can share the road with the rest of us.

AI for impactful and smarter automotive innovations

In conclusion, the integration of AI into the automotive industry has ushered in a new era of innovation, transforming every facet of the driving experience. From revolutionising vehicle design and maintenance to optimising supply chains and enhancing urban mobility, AI is driving (pun intended) unprecedented advancements that promise safer, more efficient, and more sustainable transportation solutions.

The applications of AI discussed in this blog illustrate the breadth and depth of its impact on the automotive sector. AI-driven design and manufacturing processes are pushing the boundaries of creativity and efficiency, while predictive maintenance systems are ensuring the reliability and longevity of vehicles on the road. In-car experience personalisation and natural language assistants are redefining how drivers interact with their vehicles, while advanced mobility solutions and urban planning strategies are reshaping the way we navigate and interact with cities. Furthermore, AI’s role in optimising supply chains and facilitating autonomous driving technologies underscores its potential to revolutionise the entire automotive ecosystem. By harnessing the power of AI, automotive companies can unlock new opportunities for efficiency, sustainability, and innovation, driving us towards a future where mobility is smarter, safer, and more accessible for all.

As we look ahead, it is clear that AI will continue to play a leading role in shaping the future of transportation.

What’s New in Ubuntu 24.04 LTS for Microsoft/Azure Users

Canonical recently announced the release of Ubuntu 24.04 LTS, codenamed “Noble Numbat”. This update underscores Ubuntu’s ongoing commitment to enhancing performance and security, focusing on optimizing developer productivity. The latest version features an optimized Linux kernel 6.8 and significant system management upgrades as detailed in the release notes. In this blog post, we highlight the key features and improvements that Ubuntu 24.04 LTS brings to the table, specifically tailored for users of Microsoft/Azure.

Unified marketplace offering

Ubuntu 24.04 LTS introduces a consolidated Azure Marketplace experience. Easily find the official Ubuntu images created by Canonical and endorsed by Microsoft for Azure, all under a single offering: ubuntu-24_04-lts. This simplification aids your search and selection process, helping you choose the right image for your needs and ensuring optimal compatibility and performance. Explore the Ubuntu 24.04 LTS images on the Azure Marketplace.

Optimized for Azure

Ubuntu 24.04 LTS is finely tuned to enhance performance on Azure infrastructure, ensuring that the Ubuntu images are fully compatible and support the latest cloud features as they are released. This optimization boosts system efficiency, speed, and reliability. Integration with Azure Guest Patching and the Update Management Center facilitates streamlined and continuous system updates, thereby reinforcing the overall security and stability of Ubuntu deployments.

Enhanced developer toolchains

.NET 8 is fully compatible with Ubuntu 24.04 LTS from launch, and is directly available through the official Ubuntu feeds. This synchronization with the .NET release cadence ensures developers have immediate access to the latest features and updates. Additionally, .NET 8 introduces streamlined package management and new Ubuntu container images, boosting development flexibility and deployment efficiency. (Read more in Microsoft’s blog post.)

The commitment to developer productivity also extends to other popular programming languages, including TCK-certified Java versions and the latest Rust toolchains, enhancing support and smoothing the development experience.

Confidential Computing

Ubuntu continues to lead in confidential computing with support for Confidential VMs, including capabilities for confidential AI. This is facilitated by utilizing advanced hardware security extensions such as AMD’s 4th Gen EPYC processors with SEV-SNP and NVIDIA H100 Tensor Core GPUs. These features help safeguard data at runtime from system vulnerabilities and unauthorized access, making them particularly suitable for AI training and data inference involving sensitive information.

Windows Subsystem for Linux (WSL)

Ubuntu 24.04 LTS enhances its WSL integration using the same installer technology as Ubuntu Server. This update includes support for cloud-init, standardizing developer environments across installations and ensuring consistent and streamlined workflows.
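As a sketch of what cloud-init support enables, a minimal user-data file for provisioning a fresh Ubuntu WSL instance might look like the following. The user name and package list are examples only; see Ubuntu’s WSL and cloud-init documentation for the supported keys and the expected file location:

```yaml
#cloud-config
# Illustrative user-data for a new Ubuntu 24.04 LTS WSL instance
users:
  - name: dev
    groups: [sudo]
    shell: /bin/bash
packages:
  - git
  - build-essential
write_files:
  - path: /etc/wsl.conf
    content: |
      [boot]
      systemd=true
```

Checking a file like this into a dotfiles repository lets every developer on a team bring up an identically configured WSL environment on first launch.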

Wrapping up

As we explore the capabilities of Ubuntu 24.04 LTS, Microsoft/Azure users will experience an integration that is tailored to current technological needs and equipped for upcoming developments. This version is supported for up to 12 years, providing a stable and reliable foundation that enterprises and developers can rely on for long-term projects and innovation.

Kubernetes backups just got easier with the CloudCasa charm from Catalogic

Kubernetes makes deploying and managing stateless applications very straightforward, and as a result, proper backups of Kubernetes environments have often been postponed or neglected. Today, however, more and more applications running on Kubernetes are stateful, making backup and recovery solutions for these systems critical. Neglecting backups is risky: as applications become more complex and need to preserve their state, the challenges of recovering a setup increase. Platform engineers therefore need a comprehensive backup solution for Kubernetes.

CloudCasa is a SaaS offering that provides data protection, recovery, and migration for Kubernetes. It provides a best-in-class solution for multi-cluster, multi-cloud and hybrid-cloud Kubernetes environments.


Caption: The CloudCasa dashboard, where it all comes together

The CloudCasa service comes in two variants: The CloudCasa Pro service provides centralised backup services for large, complex, multi-cluster, multi-cloud, and hybrid cloud environments. The recently introduced CloudCasa for Velero service brings similar enterprise features to existing installations of Velero, the popular open source Kubernetes backup solution. These features include centralised management and monitoring, alerts, job history, guided recovery, RBAC, and commercial support. Taking advantage of these leading-edge Kubernetes backup capabilities is now easier than ever, as Catalogic has released a CloudCasa charm.

Why use the CloudCasa charm?

The charm for CloudCasa is a Kubernetes operator created with the Charm SDK for Juju. Juju is an open source orchestration engine for software operators that enables the deployment, integration and lifecycle management of applications at any scale, on any infrastructure. An application can be a database, a web server, a dashboard or, in this case, backup software.

As a SaaS offering, an important part of CloudCasa’s value proposition is easy set-up and configuration, and minimal time to first backup. The only software component that customers need to install is the CloudCasa agent, so the engineering team is always looking for ways to make installation and upgrade of the agent easier for customers. They also wanted a native integration for Canonical’s Kubernetes platform. Juju was the perfect fit, and the charm makes consuming CloudCasa seamless for users.

“We’re excited that, working with our partners at Canonical, we were able to quickly implement the CloudCasa charm, providing convenient installation and management of our agent and consistent operations for customers using the Juju framework,” said Bob Adair, Head of Product Management at CloudCasa. ”As Kubernetes becomes more prevalent in the enterprise and the applications running on it become stateful and more complex, DevOps, SRE, and platform engineering teams need to think seriously about adequately protecting and managing these critical environments. This is where both CloudCasa and charms come in.”

How to install CloudCasa using the Charm

The CloudCasa charm is available on charmhub.io and can be installed directly from Charmhub using just a few simple Juju commands. To get started with an existing cluster and a working installation of Juju, execute:

$ juju deploy cloudcasa

Then, as the minimum required configuration, set the cluster ID:

$ juju config cloudcasa clusterid=<clusterid>

The CloudCasa application itself also provides a way to trigger installation of the charm: users can select it directly in the Add and Edit Cluster wizards in the CloudCasa UI.

Caption: Select the charm when configuring a K8s cluster in the Add Cluster wizard

About CloudCasa by Catalogic

CloudCasa by Catalogic is a Kubernetes backup-as-a-service providing innovative multi-cloud data protection, migration, and disaster recovery for Kubernetes applications and cloud data services. CloudCasa enables multi-cluster and multi-cloud application resiliency and mobility with granular or cluster-level recovery, across accounts, regions and even across clouds. CloudCasa is fully compatible and complementary to Velero, the open-source Kubernetes backup tool that has been downloaded over 100 million times. Sign up for CloudCasa for Velero.

Read more

More questions about operators, Juju and charms?

Contact Canonical

(Photo by Taylor Vick on Unsplash)

Canonical releases Ubuntu 24.04 LTS Noble Numbat

Canonical’s 10th Long Term Supported release sets a new standard in performance engineering, enterprise security and developer experience.

London, 25 April 2024.

Today Canonical announced the release of Ubuntu 24.04 LTS, codenamed “Noble Numbat”, available to download and install from https://ubuntu.com/download.

Ubuntu 24.04 LTS builds on the advancements of the last three interim releases as well as the contributions of open source developers from around the world to ensure a secure, optimised and forward-looking platform.

“Ubuntu 24.04 LTS takes a bold step into performance engineering and confidential computing to deliver an enterprise-grade innovation platform, supported for at least 12 years”, said Mark Shuttleworth, CEO of Canonical. “For developers we are delighted to announce TCK certified Java, an LTS for .NET and the latest Rust toolchain.”

Performance engineering tools pre-enabled and pre-loaded

Canonical is dedicated to raising the bar for quality and performance across the entire Ubuntu ecosystem. 

Ubuntu 24.04 LTS delivers the latest Linux 6.8 kernel with improved syscall performance, nested KVM support on ppc64el, and access to the newly landed bcachefs filesystem. In addition to upstream improvements, Ubuntu 24.04 LTS has merged low-latency kernel features into the default kernel, reducing kernel task scheduling delays.

Ubuntu 24.04 LTS also enables frame pointers by default on all 64-bit architectures so that performance engineers have ready access to accurate and complete flame graphs as they profile their systems for troubleshooting and optimisation.

“Frame pointers allow more complete CPU profiling and off-CPU profiling. The performance wins that these can provide far outweigh the comparatively tiny loss in performance. Ubuntu enabling frame pointers by default will be a huge win for performance engineering and the default developer experience”, said Brendan Gregg, Computer Performance Expert and Fellow at Intel.

Tracing with bpftrace is now standard in Ubuntu 24.04 LTS, alongside pre-existing profiling tools, to provide site reliability engineers with immediate access to essential resources.

Integrated workload accelerators bring additional performance improvements. Canonical and Intel worked together to integrate Intel® QuickAssist Technology (Intel® QAT) for the first time ever in an LTS. Intel QAT enables users to accelerate encryption and compression in order to reduce CPU utilisation and improve networking and storage application performance on 4th Gen and newer Intel Xeon Scalable processors. 

“Ubuntu is a natural fit to enable the most advanced Intel features. Canonical and Intel have a shared philosophy of enabling performance and security at scale across platforms”, said Mark Skarpness, Vice President and General Manager of System Software Engineering at Intel.

Increased developer productivity with LTS toolchains

Ubuntu 24.04 LTS includes Python 3.12, Ruby 3.2, PHP 8.3 and Go 1.22 with additional focus dedicated to the developer experience for .NET, Java and Rust.

With the introduction of .NET 8, Ubuntu is taking a significant step forward in supporting the .NET community. .NET 8 will be fully supported on Ubuntu 24.04 LTS and 22.04 LTS for the entire lifecycle of both releases, enabling developers to upgrade their applications to newer .NET versions prior to upgrading their Ubuntu release. This .NET support has also been extended to the IBM System Z platform.

“We are pleased about the release of Canonical Ubuntu 24.04 LTS and the increased performance, developer productivity, and security that it provides our joint customers,” said Jeremy Winter, Corporate Vice President, Azure Cloud Native. “Ubuntu is an endorsed Linux distro on Microsoft Azure, and an important component for many of Microsoft’s technologies, including .NET, Windows Subsystem for Linux, Azure Kubernetes Service, and Azure confidential computing. Microsoft and Canonical have a close engineering relationship spanning everything from update infrastructure in Azure to developer tooling, notably .NET 8 which is part of the Noble Numbat release from day one. We look forward to continuing our strong collaboration with Canonical to enhance developer productivity and provide a robust experience for Ubuntu on Azure.”

For Java developers, OpenJDK 21 is the default in Ubuntu 24.04 LTS, while support is maintained for versions 17, 11, and 8. OpenJDK 17 and 21 are also TCK certified, which means they adhere to Java standards and ensure interoperability with other Java platforms. A special FIPS-compliant OpenJDK 11 package is also available for Ubuntu Pro users.

Ubuntu 24.04 LTS ships with Rust 1.75 and a simpler Rust toolchain snap framework. This will support the increasing use of Rust in key Ubuntu packages, like the kernel and Firefox, and enables future Rust versions to be delivered to developers on 24.04 LTS in years to come.

New management tools for Ubuntu Desktop and WSL

For the first time in an LTS, Ubuntu Desktop now uses the same installer technology as Ubuntu Server. This means that desktop administrators can now use image customisation tools like autoinstall and cloud-init to create tailored experiences for their developers. The user interface has also received a makeover, with a modern design built in Flutter.

For those managing mixed Windows and Ubuntu environments, the Active Directory Group Policy client available via Ubuntu Pro now supports enterprise proxy configuration, privilege management and remote script execution.

Canonical continues to invest in Ubuntu on Windows Subsystem for Linux (WSL) as a first class platform for developers and data scientists. Starting with Ubuntu 24.04 LTS, Ubuntu on WSL now supports cloud-init to enable image customisation and standardisation across developer estates.

Confidential computing on the cloud and private data centres

Confidential computing secures data at runtime from vulnerabilities within the host privileged system software, including the hypervisor. It also protects data against unauthorised access by infrastructure administrators. Today, Ubuntu offers the most extensive portfolio of confidential virtual machines, available across Microsoft Azure, Google Cloud, and Amazon Web Services.

Ubuntu is also the first and only Linux distribution to support confidential GPUs on the public cloud, starting with a preview on Microsoft Azure. Building on the silicon innovation of NVIDIA H100 Tensor Core GPUs and AMD 4th Gen EPYC processors with SEV-SNP, Ubuntu confidential VMs are ideal for AI training and inference tasks on sensitive data.

Ubuntu also supports confidential computing in private data centres. Thanks to a strategic collaboration between Intel and Canonical, Ubuntu now seamlessly supports Intel® Trust Domain Extensions (Intel® TDX) on both the host and guest sides, starting with an Intel-optimised Ubuntu 23.10 build. With no changes required to the application layer, VM isolation with Intel TDX greatly simplifies the porting and migration of existing workloads to a confidential computing environment.

12 years of support with new Ubuntu Pro add-on 

To meet the needs of Canonical’s enterprise customers, Ubuntu 24.04 LTS gets a 12 year commitment for security maintenance and support. As with other long term supported releases, Noble Numbat will get five years of free security maintenance on the main Ubuntu repository. Ubuntu Pro extends that commitment to 10 years on both the main and universe repositories. Ubuntu Pro subscribers can purchase an extra two years with the Legacy Support add-on.

The 12 year commitment also applies to earlier Ubuntu releases, starting with 14.04 LTS. The LTS expansion offers benefits for individuals and organisations who want to gain even more stability while building on top of Ubuntu’s wide array of open source software libraries. 

Next steps

About Canonical 

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. 

Learn more at https://canonical.com/

Ubuntu Desktop 24.04 LTS: Noble Numbat deep dive

20 years in the making. Ubuntu 24.04 LTS brings together the latest advancements from the Linux ecosystem into a release that is built to empower open source developers and deliver innovation for the next 12 years.

The road to Noble Numbat has proven to be an exciting journey through successively ambitious interim releases, experimenting with new approaches to security (and tackling last minute CVEs), evolving our core desktop apps, and continuing our commitment to performance and compatibility across a wide array of hardware supported by the brand new Linux 6.8 kernel.

Whilst each LTS is a significant milestone, it’s never the final destination. We look forward to extending and expanding on what we’ve delivered today both within the lifecycle of Ubuntu 24.04 LTS and in future releases, always considering how we can live up to our mission, and the values of Ubuntu Desktop.

Let’s get into the details.

Rethinking provisioning

Addressing the fundamental issue of “how do I get Ubuntu on this machine?” is still one of our biggest priorities. Whilst today Ubuntu ships pre-installed on millions of desktops, laptops and workstations around the world thanks to our partnerships with OEMs like Dell, HP and Lenovo, more than ten times as many users install the operating system themselves each year. Here’s what we’re adding to simplify Ubuntu installations.

Unifying the stack

Over the last few interim releases we have aligned the underlying tech stack of the desktop installer to use the same Subiquity back end as Ubuntu Server, creating a consistent codebase across both platforms to deliver feature parity and easier maintainability. This is complemented by a brand new front end built in Flutter, which has been iterated on significantly over the past year to improve access to accessibility options, bring greater clarity to the user experience, and deliver a polished, refined result.

Additional encryption options

As part of this migration we’ve brought ZFS guided install back as a filesystem option and added support for ZFS encryption. We’ve also added improved guidance for dual-boot setups, particularly in relation to BitLocker. One major request from users has been support for hardware-backed full disk encryption and it makes its first appearance in an experimental form in Ubuntu 24.04 LTS. This implementation has certain limitations at launch which restrict its use to those devices that only require a generic kernel with no third party drivers or kernel modules, and does not currently support firmware upgrades. We intend to extend the hardware compatibility of this feature over time within the lifecycle of this release, with support for NVIDIA drivers as our first priority.

Integrated autoinstall

One of the most exciting new additions is the surfacing of autoinstall support in the graphical installer. Users or enterprises who want to create a customised, repeatable, automated installation flow can now provide the address of a local or remote autoinstall.yaml file and let Subiquity take over from there.

Check out this getting started tutorial to see how easy it is to automate user-creation, install additional apps and configure your filesystem in a format you can use across multiple machines.
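The autoinstall file itself is a small YAML document. A minimal sketch (the hostname, username and package list are illustrative, and the password hash is a placeholder you would generate yourself, e.g. with mkpasswd):

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: dev-desktop        # illustrative
    username: dev                # illustrative
    password: "<crypted-hash>"   # placeholder: a crypted password hash
  packages:
    - build-essential
    - git
  storage:
    layout:
      name: lvm                  # guided LVM layout
```

Pointing the installer at a file like this reproduces the same installation across multiple machines.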

This brings us a number of steps closer to the long term goal of zero touch provisioning, and we plan to add additional support for SSO authentication to access protected autoinstall files in a corporate environment at a later date.

New core apps

The new features don’t stop once you’ve installed Ubuntu Desktop. The new App Center (also built in Flutter) is another notable highlight, bringing a modern, more performant look to app discovery with clearer categories and application management functionality. Since its initial launch, the App Center has gained a new ratings service that allows users to vote on the quality of their apps and view an aggregated score from other users. These scores, combined with the other rich metadata available from the Snap Store, will make it easier for us to deliver additional discovery mechanisms such as top charts, most popular or recently updated.

While the App Center presents a snap-centric view by default to enable us to deliver these usability features, you can still use it to find and install deb packages via the search toggles.

As part of the new App Center development we’ve split out firmware updates into their own dedicated app. This not only allows a richer experience managing firmware but also improves performance since the old Ubuntu Software application would need to remain permanently running in the background to check for new firmware on previous releases.

New GNOME

Ubuntu Desktop 24.04 LTS continues our commitment to shipping the latest and greatest GNOME with version 46. This release delivers a host of performance and usability improvements, including improvements to file manager search and performance, expandable notifications and consolidated settings options for easier access.

As usual, Ubuntu builds on the excellent foundation provided by GNOME with a number of extensions and additions. The colour picker allows users to tailor their desktop highlights to their taste, triple buffering improves performance on Intel and Raspberry Pi graphics drivers and the addition of the Tiling Assistant extension enables quarter screen tiling support for better workspace management.

Consistent networking across desktop and server with Netplan 1.0

In Ubuntu 23.10 we included Netplan as the default tool to configure networking on desktop, unifying the stack across server and cloud where Netplan has been the default since 2016. This change enables administrators to consistently configure their Ubuntu estate regardless of platform. With the recent release of Netplan 1.0, all platforms also benefit from new features around wireless compatibility and usability improvements such as netplan status --diff.
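As a sketch, a desktop Netplan configuration is a small YAML file (the interface name here is an assumption; yours will differ):

```yaml
# /etc/netplan/01-network.yaml -- minimal desktop sketch
network:
  version: 2
  renderer: NetworkManager   # the desktop default renderer
  ethernets:
    enp3s0:                  # illustrative interface name
      dhcp4: true
```

Applying it with netplan apply (or previewing it with netplan try) renders the configuration through NetworkManager on the desktop.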

It is important to note that Netplan does not replace NetworkManager and will not impact workflows that prefer the previous configuration methods. NetworkManager has bidirectional integration with Netplan, meaning changes made in either configuration are updated and reflected in both.

You can read more about this bidirectionality in Lukas’ previous blog. To find out what’s new in Netplan 1.0, check out his recent announcement.

Comprehensive GPO support with Active Directory

Ubuntu Desktop is highly prevalent among engineering and data science teams in enterprise, academic and federal institutions around the globe, whilst Windows remains the corporate OS of choice for other departments. Canonical’s Landscape is highly effective at monitoring, managing and reporting on the compliance of Ubuntu instances across desktop, server and cloud; however, desktop IT administrators are often looking for solutions that help them manage mixed Ubuntu and Windows devices.

On-premise Active Directory has been the preferred management tool for Windows administrators for many years, and still represents the majority share of organisations. User authentication with Active Directory on Linux has been a standard for some time as part of the System Security Services Daemon (SSSD); however, in Ubuntu 22.04 LTS we introduced additional support for Group Policy Objects (GPOs), allowing further compliance configuration. Over the course of our interim releases this GPO support has been expanded to cover the majority of device and user policies requested by Active Directory administrators, including:

  • Privilege management and removal of local admins
  • Remote script execution
  • Managing apparmor profiles
  • Configuring network shares
  • Configuring proxy settings
  • Certificate autoenrollment

These come in addition to the pre-existing policies available on Ubuntu 22.04 LTS, delivering a best-in-class solution for administrators looking to empower their developers with Ubuntu Desktop.

Going forward, our attention is now turning to support for third-party cloud-based identity providers, following a proof-of-concept implementation of Azure Active Directory enrollment in Ubuntu 23.04. We are currently expanding on the functionality delivered in that release as part of a new implementation, and look forward to talking more about it in the near future.

Finally, for those developers who remain on Windows due to internal policy requirements, we are continuing to invest in enterprise tooling for Ubuntu on Windows Subsystem for Linux (WSL). Ubuntu 24.04 LTS supports cloud-init instance initialisation, enabling administrators to seed custom config files on their developer’s machines to create standardised Ubuntu environments. This is a more robust solution than existing import/export workflows and represents the first step toward future management and compliance tooling.

Secure software management in Ubuntu Desktop 24.04 LTS

Underneath the hood, Ubuntu 24.04 LTS also includes a number of security improvements for those developing and distributing software within the Ubuntu ecosystem. In Ubuntu 23.10 we landed a new version of software-properties that changed the way Personal Package Archives (PPAs) are managed on Ubuntu. 

PPAs are a critical tool for development, testing and customisation, enabling users to install software outside of the official Ubuntu archives. This allows for a great deal of software freedom but also comes with potential security risks due to the access they are granted to your OS. In Ubuntu 24.04 LTS, PPAs are now distributed as deb822-formatted .sources files with their signing key embedded directly in the file’s signed-by field. This establishes a 1:1 relationship between the key and the repository, meaning one key cannot be used to sign multiple repositories, and removing a repository also removes its associated key. In addition, APT now requires repositories to be signed using stronger public key algorithms.
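As an illustration, a PPA added on Ubuntu 24.04 LTS now lands as a single deb822 file (the PPA name and key block here are placeholders):

```
# /etc/apt/sources.list.d/example-ppa.sources -- illustrative deb822 entry
Types: deb
URIs: https://ppa.launchpadcontent.net/example/ppa/ubuntu/
Suites: noble
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 <key material>
 -----END PGP PUBLIC KEY BLOCK-----
```

Because the key lives in the Signed-By field of the same file, deleting that one file removes both the repository and its trust anchor in a single step.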

Unprivileged user namespace restrictions

Another significant security enhancement is the restriction of unprivileged user namespaces. These are a widely used feature of the Linux kernel that provides additional security isolation for applications that construct their own sandboxes, such as browsers, which use that space to execute untrusted web content. However, the ability to create unprivileged user namespaces can expose additional attack surfaces within the Linux kernel and has proven to be a stepping stone in a significant number of exploits. In Ubuntu 24.04 LTS, AppArmor is now used to selectively control access to unprivileged user namespaces on a per-application basis, so that only applications with a legitimate need can leverage this functionality.

You can read more about this change as well as a range of other security enhancements to the latest Ubuntu release in the security team’s deep dive.

Improved proposed pocket

The proposed pocket is used as a staging area for software updates prior to their release to the wider Ubuntu user base. In the past this pocket has been an all-or-nothing experience, with users who opt in to updates from proposed needing to take all updates that were available. As a result the chance of introducing system instability was significantly increased, disincentivising those who wanted to provide testing support for specific features in advance of their wider availability.

In Ubuntu 24.04 LTS we have lowered the default apt priority of updates in “proposed” to allow users to specify exactly which packages they want to install and which they want to remain stable. This change is designed to increase the confidence of users who want to test specific features ahead of their general release.
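With the lowered default priority, opting in to a single package from proposed becomes a standard apt pinning exercise. A sketch (the package name is illustrative):

```
# /etc/apt/preferences.d/proposed-example -- raise one package above the
# pocket's lowered default priority
Package: mypackage
Pin: release a=noble-proposed
Pin-Priority: 500
```

All other packages in proposed keep their low priority, so only the pinned package is pulled in for testing while the rest of the system stays on stable updates.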

Building the future, together

This brings us to the end of this deep dive into the motivations and decisions behind just some of the features of the latest Long Term Supported release of Ubuntu Desktop. It has been a challenging and exciting experience to see each of these building blocks come together over the last three interim releases. With Ubuntu Desktop 24.04 LTS our goal has been to build a platform ready to stand the test of time, and the foundation for your next, great open source project.

As always, the story continues. Thank you for joining us.

Get started today

What’s new in security for Ubuntu 24.04 LTS?

Photo by Gabriel Heinzer on Unsplash

We’re excited about the upcoming Ubuntu 24.04 LTS release, Noble Numbat. Like all Ubuntu releases, Ubuntu 24.04 LTS comes with 5 years of free security maintenance for the main repository. Support can be expanded for an extra 5 years, and to include the universe repository, via Ubuntu Pro. Organisations looking to keep their systems secure without needing a major upgrade can also get the Legacy Support add-on to expand that support beyond the 10 years. Combined with the enhanced security coverage provided by Ubuntu Pro and Legacy Support, Ubuntu 24.04 LTS provides a secure foundation on which to develop and deploy your applications and services in an increasingly risky environment. In this blog post, we will look at some of the enhancements and security features included in Noble Numbat, building on those available in Ubuntu 22.04 LTS.

Unprivileged user namespace restrictions

Unprivileged user namespaces are a widely used feature of the Linux kernel, providing additional security isolation for applications, and are often employed as part of a sandbox environment. They allow an application to gain additional permissions within a constrained environment, so that a more trusted part of an application can use these additional permissions to create a more constrained sandbox within which less trusted parts are then executed. A common use case is the sandboxing employed by modern web browsers, where the (trusted) application itself sets up the sandbox in which it executes the untrusted web content. However, by providing these additional permissions, unprivileged user namespaces also expose additional attack surfaces within the Linux kernel, and there is a long history of their abuse in exploiting various kernel vulnerabilities. The most recent interim release of Ubuntu, 23.10, introduced the ability to restrict the use of unprivileged user namespaces to only those applications which legitimately require such access. In Ubuntu 24.04 LTS, this feature has been improved both to cover additional applications, within Ubuntu and from third parties, and to provide better default semantics. For Ubuntu 24.04 LTS, the creation of unprivileged user namespaces is allowed for all applications, but access to any additional permissions within the namespace is denied. This allows more applications to handle the default restriction gracefully, whilst still protecting against the abuse of user namespaces to reach additional attack surfaces within the Linux kernel.
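For an application that legitimately needs unprivileged user namespaces, access can be granted with a small AppArmor profile, following the pattern Ubuntu uses for browsers. A sketch (the profile name and application path are assumptions):

```
# /etc/apparmor.d/myapp -- hypothetical profile granting userns access
abi <abi/4.0>,
include <tunables/global>

profile myapp /usr/bin/myapp flags=(unconfined) {
  userns,

  include if exists <local/myapp>
}
```

After placing the profile and reloading AppArmor, /usr/bin/myapp can create user namespaces while the default restriction stays in force for everything else.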

Binary hardening

Modern toolchains and compilers have gained many enhancements to be able to create binaries that include various defensive mechanisms. These include the ability to detect and avoid various possible buffer overflow conditions as well as the ability to take advantage of modern processor features like branch protection for additional defence against code reuse attacks.

The GNU C library, used as the cornerstone of many applications on Ubuntu, provides runtime detection of, and protection against, certain types of buffer overflow cases, as well as certain dangerous string handling operations, via the _FORTIFY_SOURCE macro. _FORTIFY_SOURCE can be specified at various levels providing increasing security features, ranging from 0 to 3. Modern Ubuntu releases have all used _FORTIFY_SOURCE=2, which provided a solid foundation by including checks on string handling functions like sprintf(), strcpy() and others to detect possible buffer overflows, as well as format-string vulnerabilities via the %n format specifier in various cases. Ubuntu 24.04 LTS enables additional security features by increasing this to _FORTIFY_SOURCE=3. Level 3 greatly enhances the detection of possibly dangerous use of a number of other common memory management functions, including memmove(), memcpy(), snprintf(), vsnprintf(), strtok() and strncat(). This feature is enabled by default in the gcc compiler within Ubuntu 24.04 LTS, so that all packages in the Ubuntu archive which are compiled with gcc, and any applications compiled with gcc on Ubuntu 24.04 LTS, also receive this additional protection.

The Armv8-A hardware architecture (provided by the “arm64” software architecture on Ubuntu) provides hardware-enforced pointer authentication and branch target identification. Pointer authentication provides the ability to detect malicious stack buffer modifications which aim to redirect pointers stored on the stack to attacker-controlled locations, whilst branch target identification is used to track certain indirect branch instructions and the valid locations which they can target. By tracking such valid locations, the processor can detect possible malicious jump-oriented programming attacks which aim to use existing indirect branches to jump to other gadgets within the code. The gcc compiler supports these features via the -mbranch-protection option. In Ubuntu 24.04 LTS, the dpkg package now enables -mbranch-protection=standard, so that all packages within the Ubuntu archive enable support for these hardware features where available.

AppArmor 4

The aforementioned unprivileged user namespace restrictions are all backed by the AppArmor mandatory access control system. AppArmor allows a system administrator to implement the principle of least authority by defining which resources an application should be granted access to and denying all others. AppArmor consists of a userspace package, which is used to define the security profiles for applications and the system, as well as the AppArmor Linux Security Module within the Linux kernel, which enforces the policies. Ubuntu 24.04 LTS includes the latest AppArmor 4.0 release, providing support for many new features, such as specifying allowed network addresses and ports within the security policy (rather than just high-level protocols), and various conditionals to allow more complex policy to be expressed. An exciting new development provided by AppArmor 4 in Ubuntu 24.04 LTS is the ability to defer access control decisions to a trusted userspace program. This allows quite advanced decision making to be implemented by taking into account the greater context available within userspace, or even by interacting with the user or system administrator in real time. For example, the experimental snapd prompting feature takes advantage of this work to allow users to exercise direct control over which files a snap can access within their home directory. Finally, within the kernel, AppArmor has gained the ability to mediate access to user namespaces as well as the io_uring subsystem, both of which have historically provided additional kernel attack surfaces to malicious applications.

Disabling of old TLS versions

The use of cryptography for private communications is the backbone of the modern internet. The Transport Layer Security protocol has provided confidentiality and integrity to internet communications since it was first standardised in 1999 with TLS 1.0. This protocol has undergone various revisions since that time to introduce additional security features and avoid various security issues inherent in the earlier versions of this standard. Given the wide range of TLS versions and options supported by each, modern internet systems will use a process of auto-negotiation to select an appropriate combination of protocol version and parameters when establishing a secure communications link. In Ubuntu 24.04 LTS, TLS 1.0, 1.1 and DTLS 1.0 are all forcefully disabled (for any applications that use the underlying openssl or gnutls libraries) to ensure that users are not exposed to possible TLS downgrade attacks which could expose their sensitive information.
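On Ubuntu, this protocol floor is expressed in the system-wide OpenSSL configuration; a minimal sketch of the relevant section (values shown for illustration of the mechanism, not copied from the shipped file) looks like:

```
# /etc/ssl/openssl.cnf (excerpt, illustrative)
[system_default_sect]
MinProtocol = TLSv1.2
CipherString = DEFAULT:@SECLEVEL=2
```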

Upstream Kernel Security Features

Linux kernel v5.15 was used as the basis for the Linux kernel in the previous Ubuntu 22.04 LTS release. This provided a number of kernel security features including core scheduling, kernel stack randomisation and unprivileged BPF restrictions to name a few. Since that time, the upstream Linux kernel community has been busy adding additional kernel security features. Ubuntu 24.04 LTS includes the v6.8 Linux kernel which provides the following additional security features:

Intel shadow stack support

Modern Intel CPUs support an additional hardware feature aimed at preventing certain types of return-oriented programming (ROP) and other attacks that target the malicious corruption of the call stack. A shadow stack is a hardware-protected copy of the stack’s return addresses that cannot be modified by ordinary store instructions. When the processor returns from a function call, the return address from the stack is compared against the value from the shadow stack – if the two differ, the process is terminated to prevent a possible ROP attack. Whilst compiler support for this feature has been enabled for userspace packages since Ubuntu 19.10, it could not be used until it was also supported by the kernel and the C library. Ubuntu 24.04 LTS includes this additional support for shadow stacks, allowing the feature to be enabled when desired by setting the GLIBC_TUNABLES=glibc.cpu.hwcaps=SHSTK environment variable.

Secure virtualisation with AMD SEV-SNP and Intel TDX

Confidential computing represents a fundamental departure from the traditional threat model, where vulnerabilities in the complex codebase of privileged system software like the operating system, hypervisor, and firmware pose ongoing risks to the confidentiality and integrity of both code and data. Likewise, unauthorised access by a malicious cloud administrator could jeopardise the security of your virtual machine (VM) and its environment. Building on the innovation of Trusted Execution Environments at the silicon level, Ubuntu Confidential VMs aim to restore your control over the security assurances of your VMs.

For the x86 architecture, both AMD and Intel processors provide hardware features (named AMD SEV-SNP and Intel TDX respectively) to support running virtual machines with memory encryption and integrity protection. They ensure that the data contained within the virtual machine is inaccessible to the hypervisor and hence the infrastructure operator. Support for using these features as a guest virtual machine was introduced in the upstream Linux kernel version 5.19.

Thanks to Ubuntu Confidential VMs, a user can make use of compute resources provided by a third party whilst maintaining the integrity and confidentiality of their data through memory encryption and other features. On the public cloud, Ubuntu offers the widest portfolio of confidential VMs, building on these hardware features with offerings available across Microsoft Azure, Google Cloud and Amazon AWS.

For enterprise customers seeking to harness confidential computing within their private data centres, a fully enabled software stack is essential. This stack encompasses both the guest side (kernel and OVMF) and the host side (kernel-KVM, QEMU, and Libvirt). Currently, the host-side patches are not yet upstream. To address this, Canonical and Intel have forged a strategic collaboration to empower Ubuntu customers with an Intel-optimised TDX Ubuntu build. This offering includes all necessary guest and host patches, even those not yet merged upstream, starting with Ubuntu 23.10 and extending into 24.04 and beyond. The complete TDX software stack is accessible through a public GitHub repository.

This collaborative effort enables our customers to promptly leverage the security assurances of Intel TDX. It also serves to narrow the gap between silicon innovation and software readiness, a gap that grows as Intel continues to push the boundaries of hardware innovation with 5th Gen Intel Xeon scalable processors and beyond.

Strict compile-time bounds checking

Similar to the hardening of binaries within the libraries and applications distributed in Ubuntu, the Linux kernel itself gained enhanced support for detecting possible buffer overflows at compile time via improved bounds checking of the memcpy() family of functions. Within the kernel, the FORTIFY_SOURCE macro enables various checks in memory management functions like memcpy() and memset(), verifying that the destination object is large enough to hold the specified amount of memory and aborting compilation if it is not. This helps to catch various trivial memory management issues, but previously it could not properly handle more complex cases, such as when an object was embedded within a larger object. This is quite a common pattern within the kernel, so the changes introduced in the upstream 5.18 kernel to enumerate and fix such cases greatly improve this feature. Now the compiler can detect and enforce stricter checks when performing memory operations on sub-objects, ensuring that other object members are not inadvertently overwritten and avoiding an entire class of possible buffer overflow vulnerabilities within the kernel.

Wrapping up

Overall, the vast range of security improvements that have gone into Ubuntu 24.04 LTS greatly improve on the strong foundation provided by previous Ubuntu releases, making it the most secure release to date. Additional features within the kernel, in userspace, and across the distribution as a whole combine to address entire vulnerability classes and attack surfaces. With up to 12 years of support, Ubuntu 24.04 LTS provides the best and most secure foundation to develop and deploy Linux services and applications. Expanded Security Maintenance, kernel livepatching and additional services are all provided to Ubuntu Pro subscribers to enhance the security of their Ubuntu deployments.

Achieving Performant Single-Tenant Cloud Isolation with IBM Cloud Bare Metal Servers, Ubuntu Core, Snaps, and AMD Pensando Elba Data Processing Unit

Discover how IBM Cloud’s bare metal servers offer highly confined and high-performing single-tenant cloud isolation through the use of Ubuntu Core and Snaps, supported by the AMD Pensando Elba DPU (Data Processing Unit). This setup enables the creation of secure and efficient environments for each tenant, with a design that ensures the total separation of tenant servers from the cloud underlay. The architecture delivers consistent performance and enables non-intrusive control by the cloud provider. Learn how this innovative solution can benefit your business and enhance your cloud infrastructure.

Introduction

Public cloud bare-metal servers offer dedicated physical resources, but can present isolation and performance challenges. Isolation requires that the tenant maintains full control of compute capabilities, while the cloud provider retains backend management of the infrastructure and unauthorised access is prevented. Performance requires consistent behaviour even under heavy workloads. Cloud providers face challenges in ensuring physical and logical isolation, resource allocation, monitoring, management, scalability, and security. To address these complex requirements, providers must invest in advanced technologies and implement best practices for resource allocation, monitoring, and management. They also need to regularly review and update infrastructure to meet tenant needs.

In the following discussion, we will explore how IBM Cloud is addressing these challenges by harnessing the distinctive capabilities of Ubuntu Core and Snaps deployed on the AMD Pensando Elba infrastructure accelerators.

IBM Cloud Bare Metal Servers for VPC

IBM has always been dedicated to keeping clients’ essential data secure through a strong focus on resilience, performance, and compliance. IBM Cloud executes that focus within highly regulated industries such as finance and insurance. Given IBM Cloud’s long-standing commitment to data security, it is unsurprising and essential that Bare Metal Servers for VPC (VPC BM) implements the most rigorous security guarantees to meet customers’ expectations.

Bare metal servers, which are physical servers dedicated to a single tenant, offer benefits such as high performance and customizability, but managing them in a multi-tenant environment can be complex. A key requirement is ensuring isolation between the tenant and the cloud backend, both to maintain security and to prevent performance issues caused by noisy neighbours.

VPC BM allows customers to select a preset server profile that best matches their workloads to help accelerate the deployment of compute resources. Customers can achieve maximum performance without oversubscription, with servers deployed in as little as 10 minutes.

VPC BM is powered by the latest technology. The servers are built for cloud-enterprise applications, including VMware and SAP, and can also support HPC and IoT workloads. They come with enhanced high-performance networking at 100 Gbps as well as advanced security features.

A network orchestration layer handles the networking for all bare metal servers within an IBM Cloud VPC across regions and zones. This allows for the management and creation of multiple virtual private clouds in multi-zone regions, and also improves security, reduces latency, and increases availability.

“I selected IBM Cloud VPC because of 5 points that I thought and was proven correct based on my experience using the service. First is security. Secondly is agility. The third is isolation. Fourth is the high performance. Fifth, and last, is the scalability.”

– Ivo Draginov, CEO, BatchService

AMD Pensando DSC2-200 “Elba”

In use with some of the largest cloud providers and hyperscalers on the planet, the AMD Pensando DSC2-200 has proven itself as the platform of choice for cloud providers seeking to optimise performance, increase scale and introduce new infrastructure services at the speed of software. The DSC2-200 is a full-height, half-length PCIe card powered by the AMD Pensando second-generation DPU, “Elba”. The DSC2-200 is the ideal platform for cloud providers to implement multi-tenant SDN, stateful security, storage, encryption and telemetry at line rate. The platform’s scalable architecture allows cloud providers to offer multiple services on the same DPU card.

Developers can create customised data plane services that target 400G throughput, microsecond-level latencies, and scale to tens of millions of flows. The heart of the AMD Pensando platform is a fully programmable P4 data processing unit (DPU). High-level programming languages (P4, C) enable rapid development and deployment of new features and services.

The innovative design of the AMD Pensando DPU provides a secure air gap between tenants’ compute instances and the cloud infrastructure, as well as secure isolation between tenants. This separation enables cloud operators to manage their infrastructure functions efficiently and independently of their tenants’ workloads, freeing valuable compute resources from infrastructure tasks and fully dedicating them to revenue-generating business applications. The exceptional throughput and performance of the Elba DSC2-200, along with its strong alignment with IBM’s security expectations, made it a top choice for inclusion in IBM Cloud’s bare metal servers for VPC. This combination of features enables IBM Cloud to provide highly secure and powerful environments for its customers.

Achieving IBM Cloud’s target outcomes with Ubuntu Core and Snaps

The first goal was to implement a secure and reliable operating system that IBM Cloud development teams could use to launch their management interface and functionality on the AMD Pensando DPU cards. Initially, IBM Cloud selected Ubuntu Server as the operating system: the teams were familiar with it and could easily develop on top of it using the familiar Linux toolset and APIs.

To develop software running on the AMD Pensando DPU cards, the development kit provides a complete container-based development environment. It allows for the development of data plane, management plane, and control plane functions. To perform correctly, these containers must be allowed direct communication with the card’s hardware components with fine-grained isolation. Traditional container runtimes such as Docker and Kubernetes alone cannot meet these unique requirements. Fortunately, Snap packages provide this access through secure and controlled interfaces to the operating system.

Using Snap packages, IBM Cloud developers were able to implement all the functionalities they needed in record time. This positive experience made them turn their attention to Ubuntu Core, the version of Ubuntu specifically designed for embedded systems such as AMD Pensando DPU cards. It is entirely made up of Snap packages, creating a confined, immutable and transaction-based system. Communication among containers and between containers and the operating system is locked down under full control. In addition, Ubuntu Core provides full disk encryption and secure boot, achieving additional mandatory security compliance objectives.

IBM Cloud successfully converted their bespoke AMD Pensando system image from Ubuntu Server to Ubuntu Core and, after positive results in the pre-production tests, proceeded to deploy it in production to support Bare Metal Servers on VPC.

Conclusion

In summary, Canonical’s Ubuntu Core and IBM Cloud’s components, when packaged as Snaps, provide a unique solution that effectively addresses the challenges faced by the company. This innovative approach has enabled IBM Cloud to enhance its offerings and deliver improved performance, security, and tenant isolation. Development of the solution was completed in under a year, and it has been operating successfully in production since then. The implementation has been a resounding success. Ultimately, addressing these challenges provided IBM Cloud with several advantages, including differentiation, cost savings, and improved efficiency.

The collaboration between IBM Cloud, Canonical, and AMD Pensando remains ongoing, with plans to expand the use of Ubuntu Core and Snaps to support other non-bare metal offerings, including Virtual Server for VPC. A key medium-term goal is to achieve FedRAMP compliance, which involves upgrading to Ubuntu Core 22 and ensuring FIPS compliance at the kernel and filesystem levels. This ongoing partnership and development aim to enhance the security, performance, and functionality of IBM Cloud’s solutions.

Let’s talk open design

At Canonical, we’re committed to open-source principles and fostering collaboration. Over the last 20 years, Ubuntu’s brand has become a leader in open source, with an open operating system. Our community shapes Ubuntu’s journey, and we recognise room for improvement in how we collaborate, particularly in design at Canonical. Despite most of our development being open source, our design processes often lack transparency, particularly in visuals, user interaction, and research.

We are excited to announce that we kickstarted a working group within the Design team with a mission to empower external designers to contribute to open-source projects. Our focus is on building resources that bridge the gap between designers and open-source project maintainers, making it easier for designers to dive into projects and for maintainers to receive valuable design contributions and feedback.

Before we figure out how to support you, we’re checking out ongoing Open Design initiatives and understanding the needs, motivations, and interests of designers and project maintainers. We’re learning tons along the way and prioritising ideas on how to move forward!

As we kick things off, your input would be invaluable in shaping our efforts. Therefore, we are inviting open source maintainers and designers to participate in this survey. Your input will provide valuable insights and help us ensure we’re on the right track.

Cheers,

– The Open Design Working Group

DISA publishes STIG for Ubuntu 22.04 LTS

Introduction

DISA, the Defense Information Systems Agency, has published their Security Technical Implementation Guide (STIG) for Ubuntu 22.04 LTS. The STIG is free for the public to download from the DOD Cyber Exchange. Canonical has been working with DISA since we published Ubuntu 22.04 LTS to draft this STIG, and we are delighted that it is now finalised and available for everyone to use.

We are now developing the Ubuntu Security Guide profile with a target release in summer 2024.

What is a STIG?

A STIG is a set of guidelines for how to configure an application or system in order to harden it. Hardening means reducing the system’s attack surface: removing unnecessary software packages, locking down default values to the tightest possible settings and configuring the system to run only what you explicitly require. System hardening guidelines also seek to lessen collateral damage in the event of a compromise.

STIGs are intended to be applied with judgement and common sense. Each mission or deployment is going to be different: where a piece of guidance doesn’t make sense for your specific needs, you can choose your own path forward whilst keeping the overall intentions of the STIG in mind.

The STIGs have been primarily developed for use within the US Department of Defense. However, because they are based on universally-recognised security principles, they can be used by anyone who wants a robust system hardening framework. As a result, STIGs are being more widely adopted across the US government and numerous industries, such as financial services and online gaming.

When will Canonical publish a DISA-STIG USG profile?

The STIG that DISA has published is primarily composed of a manual XCCDF XML document that describes in human-readable words how to configure Ubuntu 22.04 LTS. This XML file contains nearly 200 individual pieces of guidance, which can be quite a daunting prospect to tackle from scratch. To simplify this process, Canonical produces the Ubuntu Security Guide (USG), an automation tool that handles both the checking and remediation of the STIG rules. USG is available as part of Ubuntu Pro, and can be enabled through the Pro client.

Our engineering team is currently working through the XCCDF document and codifying the rules into a new profile for USG. We will publish the STIG profile for USG in the coming months, with a target release in summer 2024, and will make an announcement at that time.

Conclusion

The STIG for Ubuntu 22.04 LTS will allow any users or administrators to harden their systems in accordance with this rigorous standard. Doing this by hand is a time-consuming proposition, so we recommend waiting until automated tooling is available to speed up the hardening and auditing process; the USG profile is in active development and will be published as soon as it’s ready.

Further resources

What is MLflow?

MLflow is an open source platform for managing machine learning workflows. It was launched back in 2018 and has grown in popularity ever since, reaching 10 million users in November 2022. AI enthusiasts and professionals have long struggled with experiment tracking, model management and code reproducibility, so when MLflow was launched, it addressed pressing problems in the market. MLflow is lightweight and able to run on an average-priced machine, but it also integrates with more complex tools, making it ideal for running AI at scale.

A short history

Since MLflow was first released in June 2018, the community behind it has run a recurring survey to better understand user needs and ensure the roadmap addresses real-life challenges. About a year after the launch, MLflow 1.0 was released, introducing features such as improved metric visualisations, metric X coordinates, improved search functionality and HDFS support. Additionally, it offered Python, Java, R, and REST API stability.

MLflow 2.0 landed in November 2022, when the product also celebrated 10 million users. This version incorporates extensive community feedback to simplify data science workflows and deliver innovative, first-class tools for MLOps. Features and improvements include extensions to MLflow Recipes (formerly MLflow Pipelines) such as AutoML, hyperparameter tuning, and classification support, as well as improved integrations with the ML ecosystem, a revamped MLflow Tracking UI, a refresh of core APIs across MLflow’s platform components, and much more.

In September 2023, Canonical released Charmed MLflow, a distribution of the upstream project.

Why use MLflow?

MLflow is often considered the most popular ML platform. It enables users to perform different activities, including:

  • Reproducing results: ML projects usually start with simplistic plans and tend to go overboard, resulting in an overwhelming quantity of experiments. Manual or non-automated tracking implies a high chance of missing out on finer details. ML pipelines are fragile, and even a single missing element can throw off the results. The inability to reproduce results and code is one of the top challenges for ML teams, and MLflow’s automated tracking directly addresses it.
  • Easy to get started: MLflow can be easily deployed and does not require heavy hardware to run. It is suitable for beginners who are looking for a solution to better see and manage their models. For example, this video shows how Charmed MLflow can be installed in less than 5 minutes.
  • Environment agnostic: The flexibility of MLflow across libraries and languages is possible because it can be accessed through a REST API and Command Line Interface (CLI). Python, R, and Java APIs are also available for convenience.
  • Integrations: While MLflow is popular in itself, it does not work in a silo. It integrates seamlessly with leading open source tools and frameworks such as Spark, Kubeflow, PyTorch or TensorFlow.
  • Works anywhere: MLflow runs on any environment, including hybrid or multi-cloud scenarios, and on any Kubernetes.

MLflow components

MLflow is an end-to-end platform for managing the machine learning lifecycle. It has four primary components:

MLflow Tracking

MLflow Tracking enables you to track experiments, with the primary goal of comparing results and the parameters used. It is crucial when it comes to measuring performance, as well as reproducing results. Tracked parameters include metrics, hyperparameters, features and other artefacts that can be stored on local systems or remote servers. 

MLflow Models

MLflow Models provide professionals with different formats for packaging their models. This gives flexibility in where models can be used, as well as the format in which they will be consumed. It encourages portability across platforms and simplifies the management of the machine learning models. 

MLflow Projects

Machine learning projects are packaged using MLflow Projects, which ensures reusability, reproducibility and portability. A project is a directory that gives structure to the ML initiative. It contains the descriptor file used to define the project structure and all of its dependencies. The more complex a project is, the more dependencies it has, and dependencies bring risks around version compatibility and upgrades.

MLflow Projects is especially useful when running ML at scale, with larger teams and multiple models being built at the same time. It enables collaboration between team members who want to work jointly on a project, transfer knowledge between one another, or promote work to production environments.

MLflow Model Registry

The Model Registry provides a centralised place where ML models are stored. It simplifies model management throughout the full lifecycle, including transitions between different stages. It includes capabilities such as versioning and annotating, and provides APIs and a UI.

Key concepts of MLflow

MLflow is built around two key concepts: runs and experiments. 

  • In MLflow, each execution of your ML model code is referred to as a run. All runs are associated with an experiment.
  • An MLflow experiment is the primary unit of organisation for MLflow runs. It influences how runs are organised, accessed and maintained. An experiment has multiple runs, and it enables you to efficiently go through those runs and perform activities such as visualisation, search and comparisons. In addition, experiments let you export run artefacts and metadata for analysis in other tools.

Kubeflow vs MLflow

Both Kubeflow and MLflow are open source solutions designed for the machine learning landscape. They have received massive support from industry leaders, and are driven by thriving communities whose contributions are making a difference in the development of the projects. The main purpose of both Kubeflow and MLflow is to create a collaborative environment for data scientists and machine learning engineers, and to enable teams to develop and deploy machine learning models in a scalable, portable and reproducible manner.

However, comparing Kubeflow and MLflow is like comparing apples to oranges. From the very beginning, they were designed for different purposes. The projects have evolved over time and now have overlapping features, but most importantly, they have different strengths. On the one hand, Kubeflow is proficient at machine learning workflow automation using pipelines, as well as model development. On the other hand, MLflow is great for experiment tracking and model registry. From a user perspective, MLflow requires fewer resources and is easier for beginners to deploy and use, whereas Kubeflow is a heavier solution, ideal for scaling up machine learning projects.

Read more about Kubeflow vs. MLflow

Go to the blog

Charmed MLflow vs the upstream project

Charmed MLflow is Canonical’s distribution of the upstream project. It is part of Canonical’s growing MLOps portfolio. It has all the features of the upstream project, to which we add enterprise-grade capabilities such as:

  • Simplified deployment: the time to deployment is less than 5 minutes.
  • Simplified upgrades using our guides.
  • Automated security scanning: The bundle is scanned at a regular cadence.
  • Security patching: Charmed MLflow follows Canonical’s process and procedure for security patching. Vulnerabilities are prioritised based on severity, the presence of patches in the upstream project, and the risk of exploitation.
  • Maintained images: All Charmed MLflow images are actively maintained.
  • Comprehensive testing: Charmed MLflow is thoroughly tested on multiple platforms, including public cloud, local workstations, on-premises deployments, and various CNCF-compliant Kubernetes distributions.

Further reading

Managed Cloud Services: when outsourcing your operations is the most cost-effective choice

Clouds, be they private or public, surprisingly remain one of the most DIY-favouring markets. Perhaps due to nebulous and increasingly powerful technologies, a series of myths, or even unnecessary egos, the majority of non-tech-centric enterprises (meaning companies whose primary business scope lies outside the realm of IT software and hardware) still try to build and nurture in-house cloud management teams, without considering outsourcing even part of their workload. Self-management has its advantages; however, thinking it’s the only option is a mistake. Reading this you may think: “managed cloud services are for lazy people, I can do it myself.” And the truth is, you indeed can. But should you?

Cloud operations 

Let’s be honest: building a cloud is no easy feat. It is not for beginners, and involves a large series of considerations: is it large enough? Secure enough? Efficient enough? Does it justify the cost? So having made your way through this maze of questions and having finally concluded that you want to move towards a cloud deployment, the last thing you need is another set of considerations for operating it. 

Operations can be a vague term. In the tech/cloud field, it defines the entire range of actions and activities required to keep any cloud infrastructure running consistently, reliably, and efficiently. Briefly, good operations make sure your cloud does what it’s supposed to do most of the time and does not significantly disrupt your business processes when errors happen. While different from cloud to cloud, most operations can be classified into three categories: 

  • Monitoring – constant measurements of key metrics against a predefined schema to ensure functionality
  • Management – tweaks and changes to the infrastructure, such as upgrades, patches, and scaling, to ensure reliability
  • Troubleshooting – a system of protocols and procedures that keeps your workloads safe and ensures minimum data loss when incidents happen

This may sound complicated and complex, and in many ways it is. As an industry rule of thumb, for every 100 nodes in any cloud deployment, you will require at least one expert to ensure that proper operations are in place. This matters because improper operations can cause significant disruption to your entire business, from inaccurate data to major errors in processes and performance. Briefly put, cloud operations cannot be neglected.

The cost of self-managed clouds 

Regardless of how big or small or simple or complex your infrastructure is, there is a range of costs that you are likely to incur when it comes to operating your cloud. These can be: 

  • Direct – These are costs directly associated with the deployment and operation of your cloud, such as hardware purchases and maintenance, software licences, service subscriptions and more. They are relatively predictable and will allow you to budget quite easily ahead of time, but do allow a margin of +/- 10% when estimating, as the integration of components within the wider infrastructure can sometimes incur additional service costs. 
  • Indirect – When it comes to indirect costs, the definition’s boundaries become more blurry. In general, an indirect cost is any cost that, when neglected or denied, significantly reduces the reliability, efficiency, or even mere availability of your cloud. For example, IT headcount is a significant indirect cost: it will cost you money to hire, train, retain, and grow a team of experts to manage your infrastructure, and these costs will only be augmented by the ongoing skill gap the market is currently experiencing. The opportunity costs of having people work on operations rather than innovation can range from negligible to severe, as time-to-market is an essential component for maintaining a competitive edge in any industry. 

Indirect costs are highly unpredictable and involve a significant level of corporate responsibility should you choose to do everything yourself. Suppose you’ve hired your team and trained them: at any point, engineers can leave or require additional training; sometimes their talent will be needed to sustain other technical feats within your business; and sometimes things can simply go slower than expected. It’s not impossible to navigate these indirect costs. Just note that while doing so has some advantages – like full independence and more freedom to allocate resources – it carries increased risks of financial losses and slower time to market.

In light of these costs, a general observation (or unwritten market consensus) is that tech-centric companies will likely be able to self-manage their clouds successfully. Non-tech-centric companies are likely to encounter a point where managed cloud services would present a more feasible and competitive opportunity. 

When to opt for managed cloud services 

Before discussing when to opt for managed cloud services, let’s take a moment to clarify what they entail. Opting for Managed Cloud Services involves outsourcing your cloud infrastructure operations to an external expert, also known as a Managed Service Provider (MSP). You’ll ideally be able to relinquish all your operational concerns (along with responsibility for the efficiency of your operations) to the MSP, and focus on innovation or whatever else really matters for you. 

There is a pervasive myth that managed cloud services are only a useful option when your company finds itself unable to manage anything by itself, or when you simply don’t have an IT team. Nothing could be further from the truth. There are several situations where choosing a managed cloud service provider can prove both helpful and financially beneficial: 

Vertical growth – When you want to expand into new territory, it is unlikely that you will have well-established senior expertise within your IT team. That expertise can be expensive to acquire, and will need plenty of time to adjust to your company’s values and processes. Choosing an MSP to support you and enable you to grow vertically as soon as you want can help you accelerate your time to market and cut talent acquisition costs. 

Re-focus – You probably already have an IT team, and you are probably very happy with it. But when it comes to their bandwidth, you may want to have them focus on sustaining technological innovation for your competitive advantage, rather than spending most of their time keeping the lights on in your cloud infrastructure. A managed cloud service will help offer your team enough headspace to concentrate on your primary business scope. 

Cost predictability – Faced with a new project, it is wise to estimate your costs ahead of time. But cloud infrastructure, as mentioned above, can incur a lot of unexpected costs, especially when it comes to covering a skills gap and mitigating lost opportunities. A managed service provider should offer a stable and predictable price (usually per node per year), which gives you full control over your budgets and allows you to allocate resources more efficiently. 

You can find more information on general managed IT services on our managed services webpage.

Conclusion

When venturing into unfamiliar territory, opting for managed services is advisable – especially for non-tech-centric enterprises. Cloud infrastructure operations is a perfect example of such a case: a highly complex and resource-intensive set of processes that is essential to your business success, but detrimental to your costs if improperly self-managed. For any non-tech-centric enterprise looking to enter, expand, or upgrade their open-source cloud infrastructure, Managed Cloud Services are an attractive option that offers numerous advantages and can help you retain (or even sharpen) your competitive edge. 

Canonical offers a wide range of managed cloud services, and we invite you to explore them by visiting our website and getting in touch.

Canonical presence at Qualcomm DX Summit @Hannover Messe

Hannover Messe, the world’s leading industrial trade fair, brings together companies from mechanical engineering, electrical engineering, the digital industries and the energy sector to present solutions for a high-performance yet sustainable industry. This year, Qualcomm is bringing its DX Summit to Hannover Messe, gathering business and technology leaders to discuss the digital transformation solutions and experiences that are moving enterprise forward today, from manufacturing to logistics, transportation, energy and more.

Canonical will join the Qualcomm DX Summit at Hannover Messe on April 23rd, 2024, where industry experts will delve into the cutting-edge technologies that are driving Industry 4.0 forward. We’re looking forward to meeting our partners and customers on-site to discuss the latest in open-source innovation and our edge AI solutions. Fill in the form to get a free ticket for the Qualcomm DX Summit and Hannover Messe from Canonical.

Book a meeting with us

Canonical and Qualcomm collaborate to speed up Industry 4.0 adoption

Last week, Canonical and Qualcomm Technologies announced a strategic collaboration to bring Ubuntu and Ubuntu Core to devices powered by Qualcomm® processors, offering developers an easy path to create safe, compliant, security-focused, and high-performing applications for multiple industries including industrial, robotics and edge automation.

Secure and scale your smart edge AI deployments with Ubuntu

During the event, Canonical will present a talk using a real-world case-study to showcase our joint offering with Qualcomm and illustrate how Canonical solutions benefit enterprise IoT customers to bring digital transformation and AI to their latest IoT projects. 

Presenter: Aniket Ponkshe, Director of Silicon Alliances, Canonical

Date and time: 2:20 pm – 2:40 pm, April 23rd, 2024

Location: Hall 18

Schedule a meeting with our devices experts

Book a meeting with us

Ventana and Canonical collaborate on enabling enterprise data center, high-performance and AI computing on RISC-V

This blog is co-authored by Gordan Markuš, Canonical and Kumar Sankaran, Ventana Micro Systems

Unlocking the future of semiconductor innovation 

RISC-V, an open standard instruction set architecture (ISA), is rapidly shaping the future of high-performance computing, edge computing, and artificial intelligence. RISC-V’s customizable and scalable ISA enables a new era of processor innovation and efficiency. Furthermore, RISC-V democratizes innovation by allowing new companies to develop their own products on its open ISA, breaking down barriers to entry and fostering a diverse ecosystem of technological advancement. 

By fostering a more open and innovative approach to product design, RISC-V technology vendors are not just participants in the future of technology; they are a driving force behind the evolution of computing across multiple domains. RISC-V’s impact extends from the cloud to the edge:

  • In modern data centers, enterprises seek a range of infrastructure solutions to support the breadth of modern workloads and requirements. RISC-V provides a versatile solution, offering a comprehensive suite of IP cores under a unified ISA that scales efficiently across various applications. This scalability and flexibility makes RISC-V an ideal foundation for addressing the diverse demands of today’s data center environments.
  • In HPC, its adaptability allows for the creation of specialized processors that can handle complex computations at unprecedented speeds, while also offering a quick time to market for product builders.  
  • For edge computing, RISC-V’s efficiency and the ability to tailor processors for specific tasks mean devices can process more data locally, reducing latency and the need for constant cloud connectivity. 
  • In the realm of AI, the flexibility of RISC-V paves the way for the development of highly optimized AI chips. These chips can accelerate machine learning tasks by executing AI-centric computations more efficiently, thus speeding up the training and inference of AI workloads.

One of the unique products that can be designed with the RISC-V ISA is the chiplet. Chiplets are smaller, modular blocks of silicon that can be integrated to form a larger, more complex chip. Instead of designing a single monolithic chip – a process that is increasingly challenging and expensive at cutting-edge process nodes – manufacturers can create chiplets that specialize in different functions and combine them as needed. Together, RISC-V and chiplet technology are empowering a new era of chip design, enabling more companies to participate in innovation and tailor their products to specific market needs with unprecedented flexibility and cost efficiency.

Ventana and Canonical partnership and technology leadership

Canonical makes open source secure, reliable and easy to use, providing support for Ubuntu and a growing portfolio of enterprise-grade open source technologies. One of the key missions of Canonical is to improve the open source experience across ISA architectures. At the end of 2023, Canonical announced joining the RISC-V Software Ecosystem (RISE) community to  support the open source community and ecosystem partners in bringing the best of Ubuntu and open source to RISC-V platforms. 

As a part of our collaboration with the ecosystem, Canonical has been working closely with Ventana Micro Systems (Ventana). Ventana is delivering a family of high-performance RISC-V data center-class CPUs delivered in the form of multi-core chiplets or core IP for high-performance applications in the cloud, enterprise data center, hyperscale, 5G, edge compute, AI/ML and automotive markets. 

The relationship between Canonical and Ventana started with a collaboration on improving the upstream software availability of RISC-V in projects such as u-boot, EDKII and the Linux kernel. 

Over time, the teams have started enabling Ubuntu on Ventana’s Veyron product family. Through the continuous efforts of this partnership Ubuntu is available on the Ventana Veyron product family and as a part of Ventana’s Veyron Software Development Kit (SDK).

Furthermore, the collaboration extends to building full solutions for the datacenter, HPC, AI/ML and Automotive, integrating Domain Specific Accelerators (DSAs) and SDKs, promising to unlock new levels of performance and efficiency for developers and enterprises alike. Some of the targeted software stacks can be seen in the figure below.  

Today, Ventana and Canonical collaborate on a myriad of topics. Together through their joint efforts across open source communities and as a part of RISC-V Software Ecosystem (RISE), Ventana and Canonical are actively contributing to the growth of the RISC-V ecosystem. We are proud of the innovation and technology leadership our partnership brings to the ecosystem. 

Enabling the ecosystem with enterprise-grade and easy to consume open source on RISC-V platforms

Ubuntu is the reference OS for innovators and developers, but also the vehicle to enable enterprises to take products to market faster. Ubuntu enables teams to focus on their core applications without worrying about the stability of the underlying frameworks. Ventana and the RISC-V ecosystem recognise the value of Ubuntu and are using it as a base platform for their innovation. 

Furthermore, the availability of Ubuntu on RISC-V platforms not only allows developers to prototype their solutions easily, but also provides a path to market with enterprise-grade, secure and supported open source solutions. Whether it’s for networking offloads in the data center, training AI models in the cloud, or running AI inference at the edge, Ubuntu is an established platform of choice.

Learn more about Canonical’s engagement in the RISC-V ecosystem 

Contact Canonical to bring Ubuntu and open source software to your RISC-V platform.

Learn more about Ventana

The role of secure data storage in fueling AI innovation

There is no AI without data

Artificial intelligence is the most exciting technology revolution of recent years. Nvidia, Intel, AMD and others continue to produce faster and faster GPUs, enabling larger models and higher throughput in decision-making processes.

Outside of the immediate AI hype, one area remains somewhat overlooked: AI needs data. First and foremost, storage systems need to provide high-performance access to ever-growing datasets, but more importantly, they need to ensure that this data is securely stored – not just for the present, but also for the future.

There are multiple types of data used in typical AI systems:

  • Raw and pre-processed data
  • Training data
  • Models
  • Results

All of this data takes time and computational effort to collect, process and output, and as such needs to be protected. In some cases, like telemetry data from a self-driving car, the data might never be reproducible. Even after training data has been used to create a model, its value is not diminished: improvements to models require consistent training data sets so that any adjustments can be fairly benchmarked.

Raw, pre-processed, training and results data sets can contain personally identifiable information, so steps need to be taken to ensure that they are stored in a secure fashion. Beyond the moral responsibility of safely storing data, there can also be significant penalties associated with data breaches.

Challenges with securely storing AI data

We covered many of the risks associated with securely storing data in this blog post. The same risks apply in an AI setting as well. After all, machine learning is just another application that consumes storage resources, albeit sometimes at a much larger scale. 

AI use cases are relatively new; however, the majority of modern storage systems, including open source solutions like Ceph, have mature features that can be used to mitigate these risks.

Physical theft thwarted by data at rest encryption

Any disk used in a storage system could theoretically be lost to theft, or when returned for warranty replacement after a failure event. With at-rest encryption, every byte of data stored on a disk – spinning media or flash – is useless without the cryptographic keys needed to decrypt it. This protects sensitive data and proprietary models created after hours or even days of processing.
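The principle can be illustrated with a short, self-contained sketch. Note that this is a toy construction for illustration only (a SHA-256-derived keystream, not a real cipher); production systems implement at-rest encryption with vetted primitives such as AES, typically via dm-crypt/LUKS, and the names below are hypothetical.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from a key and nonce.
    Toy construction for illustration -- NOT real cryptography."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_bytes(data: bytes, stream: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, stream))

key = secrets.token_bytes(32)    # held by the key management service, never on the disk
nonce = secrets.token_bytes(16)  # stored alongside the ciphertext
plaintext = b"proprietary model weights"

ciphertext = xor_bytes(plaintext, keystream(key, nonce, len(plaintext)))

# A stolen disk exposes only ciphertext and nonce: unreadable without the key.
assert ciphertext != plaintext
# With the key, decryption is the same XOR operation.
assert xor_bytes(ciphertext, keystream(key, nonce, len(ciphertext))) == plaintext
```

The key point is the separation of concerns: the disk holds only ciphertext, while the key lives elsewhere, so losing the physical medium does not mean losing the data's confidentiality.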

Strict access control to keep out uninvited guests

A key tenet of any system design is ensuring that users (real people or headless service accounts) have access only to the resources they need, and that this access can easily be removed at any time. Storage systems like Ceph provide their own access control mechanisms and also integrate with centralised authentication systems like LDAP to make access control easy.
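The grant/check/revoke pattern behind this can be sketched in a few lines. The user and pool names below are hypothetical, loosely modelled on the style of Ceph's capability ("caps") strings; this is an illustration of the pattern, not Ceph's actual implementation.

```python
# Capability table mapping user -> resource -> permitted operations.
caps: dict[str, dict[str, set[str]]] = {}

def grant(user: str, resource: str, perms: set[str]) -> None:
    caps.setdefault(user, {})[resource] = set(perms)

def allowed(user: str, resource: str, perm: str) -> bool:
    return perm in caps.get(user, {}).get(resource, set())

def revoke(user: str) -> None:
    caps.pop(user, None)  # one call removes all of the user's access

# Least privilege: the training account only reads, ingest also writes.
grant("client.training", "pool=training-data", {"read"})
grant("client.ingest", "pool=raw-data", {"read", "write"})

assert allowed("client.training", "pool=training-data", "read")
assert not allowed("client.training", "pool=training-data", "write")

# When the account is no longer needed, access disappears immediately.
revoke("client.training")
assert not allowed("client.training", "pool=training-data", "read")
```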

Eavesdropping defeated by in flight encryption

There is nothing worse than someone listening in on a conversation they should not be privy to. The same thing can happen in computer networks. By encrypting all network flows – client-to-storage traffic as well as the storage system’s internal networks – no data can be leaked to third parties eavesdropping on the network.
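In practice this means enabling TLS on every endpoint. As a small illustration, a client built on Python's standard `ssl` module gets encryption, certificate verification and hostname checking by default; an S3 client talking to an HTTPS object gateway would wrap its connection with a context like this:

```python
import ssl

# Default client context: TLS with certificate verification and hostname
# checking enabled, so traffic to the storage endpoint is encrypted and
# the endpoint's identity is authenticated.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

Verifying the server's certificate matters as much as the encryption itself: without it, an eavesdropper could simply impersonate the storage endpoint.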

Recover from ransomware with snapshots and versioning

It seems like every week another large enterprise has to disclose a ransomware event, in which an unauthorised third party has taken control of their systems and encrypted the data. Not only does this lead to downtime, but also to the possibility of having to pay a ransom for the decryption key to regain control of those systems and access to the data. AI projects often represent a significant investment of both time and resources, so having an initiative undermined by a ransomware attack could be highly damaging.

Using point-in-time snapshots or object versioning can allow an organisation to revert to a previous, non-encrypted state and potentially resume operations sooner.
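The recovery idea can be sketched with a toy versioned object store. The API below is hypothetical, loosely modelled on S3-style versioning as exposed by object gateways: every write appends a new immutable version, so a malicious overwrite never destroys earlier data.

```python
# key -> list of versions, oldest first
store: dict[str, list[bytes]] = {}

def put(key: str, data: bytes) -> None:
    store.setdefault(key, []).append(data)  # every write appends a new version

def get(key: str, version: int = -1) -> bytes:
    return store[key][version]              # default: latest version

put("models/classifier.bin", b"v1 weights")
put("models/classifier.bin", b"v2 weights")

# Ransomware "overwrites" the object with encrypted garbage...
put("models/classifier.bin", b"\x99\x81\x13 encrypted garbage")

# ...but earlier versions are immutable, so the object can be restored.
good = get("models/classifier.bin", version=-2)
assert good == b"v2 weights"
put("models/classifier.bin", good)          # revert by re-publishing the good version
assert get("models/classifier.bin") == b"v2 weights"
```

Real systems add retention policies and access controls on top, so that the attacker cannot simply delete the old versions as well.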

Learn more

Ceph is one storage solution that can be used to store various AI datasets, and is not only scalable to meet performance and capacity requirements, but also has a number of features to ensure data is stored securely.  

Find out more about how Ceph solves AI storage challenges here.

Additional resources

Canonical announces collaboration with Qualcomm

The collaboration will bring Ubuntu and Ubuntu Core to devices powered by Qualcomm® processors

Today Canonical, the publisher of Ubuntu, announced a collaboration with Qualcomm Technologies, Inc., the latest major System-on-Chip manufacturer and designer to join Canonical’s silicon partner program.

Through the partner program, Qualcomm Technologies will have access to a secure, open source operating system and an optimised flavour of Ubuntu for Qualcomm Technologies’ software. In addition, optimised Ubuntu and Ubuntu Core images will be available for Qualcomm SoCs, enabling enterprises to meet their regulatory, compliance and security demands for AI at the edge and the broader IoT market with a secure operating system that is supported for 10 years. 

Security-first and AI ready

The massive growth in AI and edge computing is exciting for device manufacturers. However, it also brings considerable challenges due to cybersecurity regulations which place increased security demands on embedded devices. On top of this, devices have to be easy to adopt and use by developers, and need to remain performant. 

To help meet these challenges, Qualcomm Technologies chose to partner with Canonical to create an optimised Ubuntu for Qualcomm IoT chipsets,  giving developers an easy path to create safe, compliant, security-focused, and high-performing applications for multiple industries including industrial, robotics and edge automation.

“The combination of Qualcomm Technologies’ processors with the popularity of Ubuntu among AI and IoT developers is a game changer for the industry,” commented Dev Singh, Vice President, Business Development and Head of Building, Enterprise & Industrial Automation, Qualcomm Technologies, Inc. “The collaboration was a natural fit, with Qualcomm Technologies’ Product Longevity program complementing the 10-year enterprise security and support commitments made by Canonical.”

Ideal to speed up time to market

Canonical and Ubuntu offer Qualcomm Technologies the tools and peace of mind to meet new IoT, AI and edge computing market challenges head on. 

By placing Ubuntu and Ubuntu Core at the centre of its devices and products, Qualcomm Technologies is creating a generation of devices that will be easy for developers to use and adopt.

The collaboration between Qualcomm Technologies and Canonical will provide options to the industry to accelerate time to market and reduce development costs.  Developers and enterprises can benefit from the Ubuntu Certified Hardware program, which features a growing list of certified ODM boards and devices based on Qualcomm SoCs. These certified devices deliver an optimised Ubuntu experience out-of-the-box, enabling developers to focus on developing applications and bringing products to market. 

“Canonical’s partner programs, in conjunction with Canonical’s expertise in helping customers navigate their AI and IoT journey, help set the industry bar for performance with robustness, security and compliance. The work to integrate and optimise Qualcomm Technologies’ software with Ubuntu will enable channel partners and manufacturers to bring Ubuntu and Ubuntu Core platforms to a wide range of devices,” said Olivier Philippe, VP for Devices Engineering at Canonical.

Join Canonical and Qualcomm at Embedded World

The collaboration between Canonical and Qualcomm Technologies kicks off at the Embedded World conference, held at the exhibition centre in Nuremberg, Germany, from 9 to 11 April 2024. 

  • Visit Canonical at booth 4-354
  • Visit Qualcomm at booth 5-161

To find out more about Canonical’s partnership and optimised services for IoT, edge and AI products, stop by Canonical’s booth, or visit https://ubuntu.com/internet-of-things

About Canonical 

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. Learn more at https://canonical.com/

Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm patented technologies are licensed by Qualcomm Incorporated.

Qualcomm is a trademark or registered trademark of Qualcomm Incorporated.

Canonical Delivers Secure, Compliant Cloud Solutions for Google Distributed Cloud

Today, Canonical is thrilled to announce our expanded collaboration with Google Cloud to provide Ubuntu images for Google Distributed Cloud. This partnership empowers Google Distributed Cloud customers with security-focused Ubuntu images, ensuring they meet the most stringent compliance standards.

Since 2021, Google Cloud has built a strong partnership with Canonical. This collaboration highlights both companies’ commitment to providing customers with the air-gapped cloud solutions they need. Through this partnership, Google Cloud delegates foundational image creation and maintenance to Canonical’s expertise, allowing Google Cloud to focus on the heart of Google Distributed Cloud development. Canonical’s dedication to rigorous testing upholds the reliability that data centers demand, and proactive support helps swiftly tackle critical issues, ensuring seamless data center operations. This partnership is a testament to the power of strategic collaborations in the tech sector:

  • GDC Ready OS Images: Canonical supports multiple active releases of Google Distributed Cloud (1.9.x, 1.10.x, 1.11.x, and 1.12.x) ensuring Google Cloud has flexibility and choice.
  • Risk Mitigation: Canonical employs a two-tiered image system: “development” and “stable”. This allows for thorough testing of changes before they are released into the stable production environment, minimizing potential problems.

These key benefits are the result of our unwavering pursuit of progress and innovation. Google Distributed Cloud customers can expect to reap the rewards of our continuous hard work:

  • FIPS & CIS Compliance: Google Distributed Cloud customers operating in highly regulated industries can confidently deploy FIPS-compliant and CIS-hardened Ubuntu images, knowing they adhere to critical security standards.
  • Multi-distro Support: Ubuntu’s adaptability allows Google Distributed Cloud users to run a diverse range of distro images, maximizing their choice and flexibility within the cloud environment.
  • Air-gapped Innovation: Canonical and Google Cloud are dedicated to supporting air-gapped cloud technology, providing secure, cutting-edge solutions for customers with even the most sensitive data requirements.

At Canonical, we’re committed to open-source innovation. This collaboration with Google Cloud is a prime example of how we can work together to deliver industry-leading cloud solutions to our customers. We look forward to continued partnership and providing even more value to the Google Distributed Cloud ecosystem.
