Google Cloud launches fully managed serverless product Cloud Run

Google Cloud signalled its commitment to serverless computing today by announcing the public beta of Cloud Run, a fully managed environment that lets customers deploy applications in a ‘serverless’ way.

Serverless is a cloud computing architecture that allows developers to focus on writing code rather than pre-provisioning infrastructure, as the cloud provider takes care of the execution side ‘as-a-service’.

The architecture relies on functions: developers break their applications down into small, stateless chunks that can execute without any context regarding the underlying server. Amazon Web Services offers its own functions-as-a-service with Lambda, as does Microsoft with Azure Functions.
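To make the idea of a stateless chunk concrete, here is a minimal sketch in TypeScript; the OrderEvent shape and calculateTotal function are hypothetical examples invented for illustration, not part of any cloud provider's API. Because the output depends only on the input, the platform is free to run the function on any machine and scale copies up or down.

```typescript
// A hypothetical stateless function: no global variables, no local disk,
// no knowledge of which server it runs on — just input to output.
interface OrderEvent {
  items: { price: number; quantity: number }[];
}

export function calculateTotal(event: OrderEvent): number {
  return event.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}
```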

In a blog post published today, Eyal Manor, VP of engineering, and Oren Teich, product management director at Google Cloud, said: “Traditional serverless offerings come with challenges such as constrained runtime support and vendor lock-in. Developers are often faced with a hard decision: choose between the ease and velocity that comes with serverless or the flexibility and portability that comes with containers. At Google Cloud, we think you should have the best of both worlds.”

Previously, Google customers could effectively run serverless using Google Cloud Functions, but with Cloud Run customers can “run stateless HTTP-driven containers, without worrying about the infrastructure,” according to the blog post. Customers are then charged per 100 milliseconds of usage, for ‘true’ pay-as-you-go billing.

The blog adds: “Perhaps the biggest benefit of developing applications with Google’s approach to serverless is the ease with which you can tap into a full stack of additional services. You can build end-to-end applications by leveraging services across databases, storage, messaging, data analytics, machine learning, smart assistants, and more, without worrying about the underlying infrastructure.”

How it works

Speaking to the press on Monday, Teich dug a little further under the covers, explaining that Cloud Run essentially spins up a Docker container on request. Developers write and submit their code to the Cloud Run environment, and GCP takes care of provisioning, configuring and managing servers, as well as auto-scaling, to ensure customers genuinely pay for what they use.
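In practice, the container's only job is to serve HTTP on the port the platform assigns. As a rough sketch, assuming the standard Cloud Run convention of a PORT environment variable (defaulting to 8080), a minimal TypeScript service for the Node.js runtime might look like this:

```typescript
// server.ts — a minimal, stateless HTTP service suitable for packaging into a
// Docker container. Cloud Run injects the listening port via the PORT env var.
import * as http from 'http';

const port = Number(process.env.PORT) || 8080;

const server = http.createServer((req, res) => {
  // Nothing is stored between requests, so any container instance can answer.
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Cloud Run\n');
});

server.listen(port, () => {
  console.log(`Listening on port ${port}`);
});
```

The container image built from code like this is what the developer submits; scaling the number of running instances up and down with traffic is handled by the platform, which is what underpins the pay-for-what-you-use billing described above.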

“There’s all kinds of proprietary pieces of caching and performance enhancements we are able to do,” Teich said, “so Cloud Run manages to take advantage of the very best of everything we have built over the last 10 years to give this level of scale and flexibility without, really, any compromises.”

Because Cloud Run is built around what Teich calls the ‘industry standard’ of Docker, developers don’t need to learn anything new. “Instead of pushing people to a proprietary way that Google is going to teach you to do, you can use the standard tooling … everything you have been learning for the past five years,” he added.

This includes running on top of the recently open-sourced gVisor sandbox for secure isolation of containers, and Knative, which is an “open API and runtime environment that lets you run your serverless workloads anywhere you choose” – that is, “fully managed on Google Cloud Platform, on your GKE cluster, or on your own self-managed Kubernetes cluster,” according to the blog post. Cloud Run is based on open source standards and can also be run with partners, Teich added.

Cloud Run is also being rolled out to work in tandem with Google Kubernetes Engine (GKE), meaning customers “can run serverless workloads on your existing GKE clusters”. The blog post explains: “You can deploy the same stateless HTTP services to your own GKE cluster and simultaneously abstract away complex Kubernetes concepts.”

By running on Knative, Google is able to promise greater portability across hybrid or multi-cloud environments. The post adds: “Thanks to Knative, it’s easy to start with Cloud Run and move to Cloud Run on GKE later on. Or you can use Knative in your own Kubernetes cluster and migrate to Cloud Run in the future. By using Knative as the underlying platform, you can move your workloads across platforms, substantially reducing switching costs.”

Google also announced a range of partnerships with vendors like Datadog, NodeSource, GitLab, and StackBlitz to broaden the monitoring and deployment options for customers.

Customers

Teich said customers using Cloud Run in alpha over the past year or so were running a wide range of use cases, from using it “flat out as their application server, so they have lifted and shifted their existing web-based things they were running elsewhere” to others “using Cloud Functions or other functions-as-a-service options and finding the limits with them”.

This includes scientific computing customers running complex statistical models. Others wanted a way to run serverless despite having a lot of custom languages.

For example, one launch customer is French multinational water and waste specialist Veolia. Its group chief technology officer, Hervé Dumas, is quoted in the blog post as saying: “Cloud Run removes the barriers of managed platforms by giving us the freedom to run our custom workloads at lower cost on a fast, scalable, and fully managed infrastructure. Our development team benefits from a great developer experience without limits and without having to worry about anything.”

Aircraft manufacturer Airbus is already running Cloud Run on GKE, and Madhav Desetty, chief software architect at Airbus, said: “With Cloud Run on GKE, we are able to run lots of compute operations for processing and streaming cloud-optimised aerial images into web maps without worrying about library dependencies, auto-scaling or latency issues.”

Other features and constraints

The vendor also announced a set of what it says are commonly requested serverless features, such as:

● New language runtime support: Node.js 8, Python 3.7, and Go 1.11 in general availability; Node.js 10 in beta; and Java 8 and Go 1.12 in alpha

● The new open-source Functions Framework, available for Node.js 10, will help you take the first step towards making your functions portable. You can now write a function, run it locally and build a container image to run it in any container-based environment (see the sketch after this list).

● Serverless VPC Access, which creates a VPC connector that lets your function talk to your existing GCP resources that are protected by network boundaries, without exposing the resources to the internet. This feature allows your function to use Cloud Memorystore as well as hundreds of third-party services deployed from the GCP Marketplace. It is available in beta starting today.

● Per-function identity, which provides security controls at the level of individual functions, is now generally available.
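As a rough illustration of the Functions Framework item above, the sketch below shows the export-a-function pattern the framework supports for Node.js, written here in TypeScript. The function name helloWorld and the greeting are placeholders chosen for this example, and the parameter types are deliberately kept loose rather than assuming a particular version of the framework's own type definitions; req and res are Express-style objects supplied by the framework at runtime.

```typescript
// index.ts — a minimal HTTP function in the export style the Functions
// Framework expects. No server setup here: the framework wires the exported
// function to an HTTP endpoint when it runs.
export function helloWorld(
  req: { query: Record<string, string | undefined> },
  res: { send: (body: string) => void }
): void {
  const name = req.query.name || 'World';
  res.send(`Hello, ${name}!`);
}
```

Locally, a function like this can be served with the framework's CLI (for example, `npx @google-cloud/functions-framework --target=helloWorld` after compiling to JavaScript), and the same code can be packaged into a container image to run in any container-based environment, which is the portability step the announcement describes.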