
Canonical Delivers Secure, Compliant Cloud Solutions for Google Distributed Cloud

9 April 2024, 10:55

Today, Canonical is thrilled to announce our expanded collaboration with Google Cloud to provide Ubuntu images for Google Distributed Cloud. This partnership empowers Google Distributed Cloud customers with security-focused Ubuntu images, ensuring they meet the most stringent compliance standards.

Since 2021, Google Cloud has built a strong partnership with Canonical. This collaboration highlights both companies’ commitment to providing customers with the air-gapped cloud solutions they need. Through this partnership, Google Cloud delegates foundational image creation and maintenance to Canonical’s expertise, allowing it to focus on the heart of Google Distributed Cloud development. Canonical’s dedication to rigorous testing upholds the reliability that data centers demand, and proactive support helps swiftly tackle critical issues, ensuring seamless data center operations. This partnership is a testament to the power of strategic collaborations in the tech sector:

  • GDC Ready OS Images: Canonical supports multiple active releases of Google Distributed Cloud (1.9.x, 1.10.x, 1.11.x, and 1.12.x), ensuring Google Cloud has flexibility and choice.
  • Risk Mitigation: Canonical employs a two-tiered image system – “development” and “stable” – which allows for thorough testing of changes before they are released into the stable production environment, minimizing potential problems.

These key benefits are the result of our unwavering pursuit of progress and innovation. Google Distributed Cloud customers can expect to reap the rewards of our continuous hard work:

  • FIPS & CIS Compliance: Google Distributed Cloud customers operating in highly regulated industries can confidently deploy FIPS-compliant and CIS-hardened Ubuntu images, knowing they adhere to critical security standards.
  • Multi-distro Support: Ubuntu’s adaptability allows Google Distributed Cloud users to run a diverse range of distro images, maximizing their choice and flexibility within the cloud environment.
  • Air-gapped Innovation: Canonical and Google Cloud are dedicated to supporting air-gapped cloud technology, providing secure, cutting-edge solutions for customers with even the most sensitive data requirements.

At Canonical, we’re committed to open-source innovation. This collaboration with Google Cloud is a prime example of how we can work together to deliver industry-leading cloud solutions to our customers. We look forward to continued partnership and providing even more value to the Google Distributed Cloud ecosystem.

OpenStack with Sunbeam as an on-prem extension of the OpenStack public cloud

3 April 2024, 07:00

One of the biggest challenges that cloud service providers (CSPs) face these days is to deliver an extension of the public cloud they host to a small-scale piece of infrastructure that runs on customers’ premises. While the world’s tech giants, such as Amazon or Microsoft, have developed their own solutions for this purpose, many smaller, regional CSPs rely on open source projects like OpenStack instead. However, while OpenStack is fully suitable for powering large pools of compute and storage, shrinking it down to a small box that runs on-prem, replicating the delivery process across hundreds or thousands of customers’ sites, and operating them effectively might pose an interesting challenge.

Fortunately, there are now ways to minimise OpenStack’s footprint and accelerate its delivery. In this short blog, we will showcase how you can use project Sunbeam to seamlessly deploy a small-scale OpenStack cloud and plug it in as an on-prem extension of the OpenStack public cloud you host, ensuring full API compatibility and an integration with leading cloud management platform (CMP) tools.

More in this series

This blog post is part of a larger series demonstrating various use cases for project Sunbeam and OpenStack. By using practical examples, we showcase how these two technologies can be used to address real-life challenges.

Other blogs in this series:

Before we start

Before we start, let’s briefly clarify some terms that we’ll be using in this blog.

What is Sunbeam?

Sunbeam is an upstream project under the governance of the OpenInfra Foundation (OIF) created to lower the barrier to entry for OpenStack, simplify its adoption process, and set the foundation for an autonomous private cloud. Sunbeam uses cloud-native architecture and total bottom-up automation to make OpenStack more accessible to newcomers and to help users get to grips with the platform immediately.

What is MicroStack?

MicroStack (based on Sunbeam) is an OpenStack distribution designed for small-scale cloud environments. While it is available with full commercial support from Canonical, it can also be self-deployed with no friction, effectively eliminating the need for a paid consulting engagement. MicroStack currently includes core OpenStack services only but is expected to evolve quickly to ensure full feature parity with Canonical’s Charmed OpenStack soon.

OpenStack with Sunbeam as an on-prem extension of the OpenStack public cloud

Many organisations who embrace a public-cloud-only approach face a need for on-prem infrastructure sooner rather than later. This usually stems from cost optimisation and FinOps practices, privacy concerns, or a requirement to guarantee the desired level of performance. As a result, leading public cloud providers have already been offering their own solutions in this space for years. If you are a CSP, you’ve likely heard about AWS Outposts or Azure Stack. However, finding a reasonable equivalent to those proprietary solutions in the open source space has always been a challenge.

The challenge

Most of the CSPs who offer public cloud services run OpenStack underneath them. According to a recent report by the OpenInfra Foundation, OpenStack powers more than 300 data centres that act as regional public cloud infrastructure. This results from its unparalleled maturity, enterprise-grade stability and versatility; fourteen years after its initial release, OpenStack continues to be the open source cloud platform of choice.

However, while OpenStack is fully suitable for powering those large data centres, putting it on customers’ premises might be challenging without using proper tools. This is because OpenStack is designed to deal with big clusters, rather than running on a single machine with limited hardware resources. At the same time, customers usually don’t want to over-invest: they want to start small and grow their on-prem infrastructure as they go.

Another interesting challenge is ongoing operations. OpenStack is known to be inherently complex. Therefore, operating it in production creates significant overhead for the cloud operations team. And that is when you manage just one cloud; what if you have to manage hundreds or thousands of clouds? The overall cost associated with running those on-prem extensions results in an ever-inflating total cost of ownership (TCO).

Fortunately, dedicated tools exist to help you bypass all of these challenges.

Take a “cloud in a box” approach

With project Sunbeam, CSPs can accelerate the delivery of those on-prem extensions by taking a “cloud in a box” approach. This means shipping hardware with a very straightforward deployment procedure, or even shipping it with OpenStack pre-installed. By using cloud-native architecture underneath and running all OpenStack services inside containers and snaps, Sunbeam effectively shrinks down the footprint of OpenStack, making it a first-class citizen on “non-beefy” machines.

The absolute minimum is just one physical node, though note that such a configuration does not ensure high availability (HA). Even so, simply running five commands gets a fully functional OpenStack up and running. This is so easy that even your non-technical customers can do it with no friction. But this is not the end! By using full automation and rich lifecycle management capabilities, Sunbeam enables the OpenStack cloud to scale out very quickly. This way, your customers can always start small and grow according to their needs.
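To give a sense of how frictionless this is, the single-node flow described above boils down to roughly the following commands (adapted from the MicroStack quickstart at the time of writing; exact flags, channels and instance names may differ between releases):

```shell
# Install the OpenStack snap, which bundles Sunbeam
sudo snap install openstack

# Prepare the machine (users, groups, kernel settings) for hosting OpenStack
sunbeam prepare-node-script | bash -x && newgrp snap_daemon

# Bootstrap a single-node cluster with default answers
sunbeam cluster bootstrap --accept-defaults

# Configure the cloud and write user credentials to demo-openrc
sunbeam configure --accept-defaults --openrc demo-openrc

# Launch a test Ubuntu VM to confirm the cloud works end to end
sunbeam launch ubuntu --name test
```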

Plugging it into your OpenStack public cloud

One of the biggest advantages of using OpenStack everywhere is that it enables you to use exactly the same cloud management software for your public cloud infrastructure and for all the extensions running on your customers’ premises. Sunbeam-based OpenStack installations rely on pure upstream code and ensure API compatibility. Thanks to that, your customers won’t see a difference between your public cloud services and the on-prem infrastructure you provide – just like Amazon customers, for example.

What’s more, you can now easily plug all those on-prem clouds into the CMP software that you use. This way, you can view all the environments you manage from a single pane of glass (SPOG) dashboard. You can monitor them, check their health status, or even provision some administrative workloads centrally. This is especially useful for ongoing performance optimisation, for example, when you have to benchmark the performance of all those distributed environments.
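Because every environment speaks the same OpenStack API, even a plain `clouds.yaml` plus the standard OpenStack client can serve as a rudimentary single pane of glass. A minimal sketch, assuming hypothetical cloud names (`public`, `customer-a`, `customer-b`) defined in your `clouds.yaml`:

```shell
# Iterate over every environment defined in clouds.yaml
# and pull the same health information from each one.
for cloud in public customer-a customer-b; do
  echo "== $cloud =="
  # Hypervisor inventory: a quick view of hardware health
  openstack --os-cloud "$cloud" hypervisor list
  # Any instances stuck in ERROR state, across all projects
  openstack --os-cloud "$cloud" server list --all-projects --status ERROR
done
```

A full CMP replaces this loop with dashboards and alerting, but the principle is the same: one API, many clouds.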

The final challenge to address is the ongoing operations of these on-prem environments. Let’s not forget that at the end of the day those are still OpenStack clouds. As mentioned earlier, OpenStack is inherently complex and its operations might pose a real challenge. However, project Sunbeam uses full automation: contrary to other OpenStack distributions, all typical post-deployment operations are fully automated, not just the initial delivery phase of the project. This extensive automation also covers procedures, such as upgrades, which were historically very complicated. All of that makes OpenStack suitable for the scale we’re talking about.


Sample CMP dashboard

Conclusions

OpenStack’s architecture has long made mass on-prem deployment feel like a distant dream. However, with project Sunbeam, CSPs can finally break down these barriers. Its minimal footprint shrinks OpenStack down to a piece of software that can run inside a single box. By ensuring full API compatibility, OpenStack clouds deployed with project Sunbeam can be seamlessly plugged into the broader cloud ecosystem of the CSP. Finally, full automation used both during the initial installation and the post-deployment phase guarantees smooth delivery and frictionless operations.

Learn more about Sunbeam

Now that you’ve got a basic understanding of project Sunbeam and how it can be used for various use cases, you might be wondering where to find more information about it. If that’s the case, we have prepared some interesting follow-up materials for you:

Get in touch with Canonical to discuss your project plans

VMware alternatives: discover open source

27 February 2024, 09:54

Are you looking for VMware alternatives?

Think open source – the world’s leading software portfolio. Open-source software enables you to build fully functional virtualisation and cloud infrastructure while ensuring total cost of ownership (TCO) reduction and business continuity. In this blog, we will walk you through the open source ecosystem. We will help you understand how it differs from other VMware alternatives by answering five common questions.

What is open source?

Open source is a generic term for any software released under a licence that allows unlimited redistribution and modification. It is available to everyone, people can use it free of charge, and everyone can contribute to its development. Unlike VMware software or its proprietary alternatives, there is no single entity that owns open source. Instead, it is usually created under the governance of independent foundations, which bring together individuals, universities, research institutions and large enterprises from various parts of the world.

So you can think of open source as a collection of software meeting those criteria. There is no single place where this “collection” is hosted, however. Open-source software is distributed across numerous code repositories on GitHub, SourceForge, Launchpad, etc. Fortunately, leading Linux distributions provide streamlined access to this software. By making applications and infrastructure components available in the form of software packages, they serve as open source integration platforms.

Ubuntu, published by Canonical, is the world’s leading open source integration platform. Preferred by 66% of developers and endorsed by executives, Ubuntu powers one-third of all Linux-based web servers worldwide, and its market share in the infrastructure space continues to grow. Ubuntu provides immediate access to tens of thousands of software packages and ensures a human-friendly interface to install and use open source.
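To illustrate, the kind of infrastructure building blocks discussed later in this post are one package-manager command away on Ubuntu. A minimal sketch (package names as found in recent Ubuntu LTS releases):

```shell
# Pull a complete KVM/QEMU virtualisation stack from the Ubuntu archives
sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system virtinst cpu-checker

# Verify the host is ready to run hardware-accelerated VMs
kvm-ok
```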

Why open source over other VMware alternatives?

So obviously, open-source solutions are just one of the available VMware alternatives. Several proprietary solutions exist too. These include leading public clouds, premium versions of Proxmox Virtual Environment (VE), Citrix Hypervisor, Hyper-V, etc. What makes open source better, then?

In short, the benefits of open source can be summarised in the following five bullet points:

  • TCO reduction – since open-source software does not require expensive licences, standardising on open source leads to significant cost savings over time.
  • No vendor lock-in – with open source, you are no longer dependent on a single vendor; the software is developed by the entire community, consisting of thousands of developers.
  • Innovation advances – the development pace of open-source software is far higher than that of proprietary software, which helps you stay at the forefront of technology.
  • Higher software quality – open-source software usually passes through a rigorous software development process which results in higher quality and better security.
  • Community collaboration – since billions of people worldwide use open source daily, enterprises can benefit from fantastic community collaboration through numerous industry conferences, technical forums, knowledge bases, etc.

No wonder open source is becoming the new standard. And this trend will only intensify in the following years.

Is open source suitable for enterprises?

Yes, it is. There is no reason why it wouldn’t be. All of the benefits mentioned above speak in favour of open source.

However, enterprises need not just software but all types of commercial services around it. For example, companies might not have enough time to experiment with the software. They would rather hire external consultants to deploy IT systems for them so that they could start using them immediately. Or they cannot rely solely on community support if their business applications are expected to run 24/7. 

Canonical understands those challenges and provides a complete package of optional commercial services for businesses willing to adopt open source on Ubuntu. This includes design and delivery services for open-source solutions, enterprise support, fully managed services for both infrastructure and applications, and comprehensive training courses. By partnering with Canonical, enterprises can rest assured that their migration to open source will be hassle-free and stress-free.

Telcos, big banks, government institutions and leading companies in the industrial space are all examples of organisations that have successfully completed their digital transformation with open source. There is no reason why your company shouldn’t join this club.

How to build a cloud with open source?

Unlike VMware or its proprietary alternatives, there is no single open-source monolith that provides all the capabilities in a single place. Instead, several independent components exist that, added together, can serve as a cloud.

Think of it through an analogy to Lego. Let’s say that you want to build a car with Lego. There are many pieces in the box. Each piece doesn’t look like a car. However, when you start mounting them together, you will quickly see an engine, wheels, seats, etc. And even more importantly, you can choose to build a Coupe, Sedan, SUV or even a truck – a vehicle that you designed according to your needs.

The same applies when building cloud infrastructure with open source. By using various independent software components, you can build a simple virtualisation environment, an ordinary Infrastructure-as-a-Service (IaaS) cloud, a Container-as-a-Service (CaaS) cloud or even a Software-as-a-Service (SaaS) platform. Then you can extend its functionality with live migration capabilities, automated power management, observability, etc. to ensure feature parity with your existing VMware infrastructure.
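As a rough sketch of this “Lego” approach, here is how two such building blocks could be snapped together on a single Ubuntu machine – LXD for the virtualisation layer and MicroK8s for a lightweight CaaS layer (snap names are current at the time of writing; channels and image versions may vary):

```shell
# Block 1: virtualisation with LXD (system containers and VMs)
sudo snap install lxd
lxd init --auto
lxc launch ubuntu:22.04 first-vm --vm

# Block 2: Container-as-a-Service with MicroK8s
sudo snap install microk8s --classic
microk8s status --wait-ready
```

Each block works on its own; combined, and extended with further components, they start to resemble the cloud you actually need.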

How to move to open source?

Preferably with a trusted partner. Canonical provides free access to all the necessary open-source components that will help you build cloud infrastructure tailored to your needs. Moreover, the most demanding organisations can leverage Canonical’s professional services, which include analysis of existing workloads, designing the right migration strategy to avoid service downtime and ensure business continuity, and more.

The migration away from VMware is not a trivial task. No one claims it is. However, by choosing open source over other VMware alternatives and by standardising on the right open source integration platform, you can be assured that your migration is not only going to be painless but also that your organisation will see long-term benefits, such as increased innovation and TCO reduction.

Explore migration strategies:

AI on-prem: what should you know?

30 January 2024, 13:11

Organisations are reshaping their digital strategies, and AI is at the heart of these changes, with many projects now ready to run in production. Enterprises often start these AI projects on the public cloud because of the ability to minimise the hardware burden. However, as initiatives scale, organisations often look to migrate the workloads on-prem for reasons including costs, digital sovereignty or compliance requirements. Running AI on your own infrastructure comes with clear benefits, but it also raises some major challenges that infrastructure and MLOps experts need to consider.

MLOps acts as the enabler in running AI workloads in a repeatable and reproducible manner. MLOps platforms such as Charmed Kubeflow are cloud-native applications that run on Kubernetes. Building such an architecture on-prem helps organisations to easily deploy, manage and scale their AI applications.

Advantages of AI on-prem

When building their AI strategies, organisations should consider factors such as cost-effectiveness, ability to manage, security and compliance, and performance. Let’s take a look at how running AI projects on-prem addresses these priorities.

AI on existing infrastructure

Building a completely new data centre for AI projects can be overwhelming and take time, but it isn’t always necessary. If you already have existing infrastructure that you aren’t fully utilising, it could be suitable for your AI initiatives. Doing AI on-prem on existing infrastructure is a great way to quickly kickstart new projects and experiments, assess the possible return on investment of different use cases, and gain additional value from your existing hardware.

Secure ML workloads on-prem

Many organisations already have well-defined internal policies that any new AI initiatives also need to follow. Adhering to these policies is easier using on-prem infrastructure, ensuring a secure and compliant foundation for the MLOps platform and enabling you to build repeatable and reproducible ML pipelines. Especially in highly regulated industries, running AI on-prem can accelerate compliance and security check-ups, helping you to focus on building models rather than on security concerns.

Cost-effective solution

While public clouds nowadays offer different types of instances to run machine learning workloads, for enterprises that store all their data on their own infrastructure, moving it would come at a significant cost. You can circumvent this challenge entirely by running your AI projects in the same location where you already store your data. This is one of the reasons why organisations often prefer building their AI workloads on-prem.

Disadvantages of AI on-prem

Building and scaling AI projects requires computing power, and for organisations that need more of it, this is a big investment to make before even getting started. On-prem infrastructure requires a significant upfront cost and comes with the burden of operating the infrastructure post-deployment. On-prem deployments also offer only a limited number of pre-trained models and ready-made services that enterprises can take advantage of.

At the opposite end of the spectrum, public clouds are easy to get started with and do not require a big upfront investment. They offer large libraries of pre-trained models, through services such as Amazon Bedrock, that can give organisations a head start. That being said, public clouds often prove to be less cost-effective in the long term.

Rolling out a new strategic initiative such as an artificial intelligence project comes with a new set of challenges. When deciding whether to run your AI initiatives on-prem, there are a number of key factors you should consider to determine whether it’s the right approach for you:

When should you run AI on-prem?

  • Compute performance: It’s no secret that AI projects require significant computing power, and these requirements are only increasing. You should only commit to an on-prem AI strategy if you are certain that you have the resources to satisfy these compute demands, with room to scale. 
  • Industry regulations: Complying with industry regulations is often easier when you have full control over your data on your own hardware. If you operate in highly-regulated sectors such as healthcare or financial services, then on-prem AI is likely to be the right choice. 
  • Privacy: These same principles extend to the broader realm of data privacy, which plays an important role in any AI project. On-prem infrastructure represents a compelling option for organisations looking to maximise control over their data and ML models.
  • Initial investment: The best infrastructure option will depend largely on the budget allocated for the initial investment. If you lack the resources to support upfront hardware costs, public cloud may be more suitable – unless you have existing, unutilised on-prem infrastructure that you can take advantage of.
  • Customisable solution: Do you want a ready-made solution, or a platform that enables you to customise your AI deployment to suit your specific requirements? If you’re looking for flexibility, on-prem is the clear winner.

Open source solutions for AI on-prem

Open source is at the heart of the AI revolution. There are a growing number of open source solutions that benefit from wide adoption in the machine-learning world. Organisations can build a fully open source MLOps platform on-prem using some of the leading tools available:

  • OpenStack: a fully functional cloud platform that ensures smooth integration with leading performance acceleration devices, such as GPUs.
  • Kubernetes: a container orchestration platform for cloud-native workloads.
  • Kubeflow: an MLOps platform to develop and deploy machine learning models.
  • MLflow: a machine learning platform that provides a model registry.
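As a hypothetical sketch, a minimal on-prem MLOps stack built from these tools can be bootstrapped with Canonical’s distributions. The commands below follow the Charmed Kubeflow quickstart; the channel, enabled add-ons and the MetalLB address range are assumptions that depend on your environment:

```shell
# A small Kubernetes to host the MLOps platform
sudo snap install microk8s --classic
sudo snap install juju --classic

# Storage, DNS and a load balancer for Kubeflow's gateway
# (hypothetical address range - pick one free on your network)
sudo microk8s enable dns hostpath-storage metallb:10.64.140.43-10.64.140.49

# Deploy Charmed Kubeflow with Juju
juju bootstrap microk8s
juju add-model kubeflow
juju deploy kubeflow --trust --channel=1.8/stable
```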

Open source tools come with plenty of benefits. However, it is important to choose the right versions. To ensure the security of the tooling as well as seamless integration, organisations need official distributions that are suitable for enterprise deployments – such as those delivered by Canonical.

Want to learn more about AI on private cloud with open source? Enroll now for our live webinar.

Hybrid strategy with open source 

According to the Cisco 2022 Global Hybrid Cloud Trends Report, 82% of IT decision-makers have adopted a hybrid IT strategy. Correlating this with all the focus that organisations put nowadays on their artificial intelligence strategy, it is easy to see that many new projects will run in a hybrid cloud scenario. Open source tools – like those that Canonical supports and integrates into an end-to-end solution, mentioned above – enable organisations to build and scale their AI initiatives on their cloud of choice. Users can kickstart on a public cloud to minimise the hardware burden and then develop a hybrid cloud strategy that ensures both time effectiveness and cost efficiency.

AI webinar series

Follow our webinar series and stay up to date with the latest news from the industry.

Further reading
