DIGITALOCEAN CLOUD COMPUTING GUIDE
Summary by Damian Ndunda © 2020
TABLE OF CONTENTS
DIGITALOCEAN CLOUD COMPUTING GUIDE
CHAPTER: RUNNING CLOUD NATIVE APPLICATIONS ON DIGITALOCEAN KUBERNETES
Kubernetes and DigitalOcean Kubernetes
1 TRENDS IN MODERN APPLICATION DEVELOPMENT
What makes an application Cloud Native?
Monolithic vs. Microservices Architectures
Containers vs. Virtual Machines
Build and Ship: Dockerizing an Application
Health Checking and State Management
KUBERNETES DESIGN OVERVIEW
Scaling, Updating and Rolling Back: Kubernetes Deployments
Exposing your Application: Kubernetes Services
Pod Management: Kubernetes Node Agents
Control and Schedule: Kubernetes Control Plane
Minimizing Day 2 Cost: Managed Operations and Maintenance
Kubernetes in the DigitalOcean Cloud
Harness the Kubernetes Ecosystem
Simple, Transparent Pricing
CHAPTER: CONVERT DIGITALOCEAN DROPLET TO VMWARE VM
CHAPTER: DIGITALOCEAN PRICING
PRICING FOR BASIC AND PROFESSIONAL TIERS
Can you provide an example of how App Platform pricing works?
How much outbound transfer do I get with App Platform?
What happens if I exceed the usage limits?
How much does the Starter tier cost?
Are there any compute resources in the Starter tier?
Are there any free resources in the Basic and Professional tiers?
What is a development database?
What forms of payment do you accept?
When will my card be charged?
Am I charged when I enter my credit card?
Why am I billed for powered off Droplets?
I got a $100 credit when I opened an account. When will my card be charged?
What’s the price for the Marketplace 1-Click Apps?
How do I remove my card from the account?
Can I prepay for my resources?
How do you calculate costs in the pricing calculator?
FOREWORD
Traditionally, software releases follow a time-based schedule, but it has become increasingly common to see applications and services continuously delivered and deployed to users throughout the day. This truncating of the traditional software release cycle has its roots in both technological developments, such as the explosive growth of cloud platforms, containers, and microservices-oriented architectures, and cultural developments, with tech-savvy and mobile-enabled users increasingly expecting new features, fast bug fixes, and a responsive, continuously developing product.
This accelerated development cadence often accompanies the packaging of applications into containers, and the use of systems that automate their deployment and orchestration, like Docker Swarm, Marathon, and Kubernetes.
CHAPTER: RUNNING CLOUD NATIVE APPLICATIONS ON DIGITALOCEAN KUBERNETES
DigitalOcean is a cloud services platform delivering the simplicity developers love and the reliability businesses trust to run production applications at scale. It provides highly available, secure, and scalable compute, storage, and networking solutions that help developers build great software faster. DigitalOcean was founded in 2012 and has offices in New York and Cambridge, MA.
Kubernetes and DigitalOcean Kubernetes
Kubernetes was initially open-sourced by Google in 2014. With its simplicity and developer-friendly interfaces, DigitalOcean Kubernetes empowers developers to launch their containerized applications into a managed, production-ready cluster without having to maintain and configure the underlying infrastructure. Seamlessly integrating with the rest of the DigitalOcean suite (including Load Balancers, Firewalls, Object Storage Spaces, and Block Storage Volumes), and with built-in support for public and private image registries like Docker Hub and Quay.io, DigitalOcean Kubernetes lets developers run and scale container-based workloads with ease on the DigitalOcean platform.
1 TRENDS IN MODERN APPLICATION DEVELOPMENT
Designing applications that will be rapidly and continuously deployed into cloud environments has led to the development of new software methodologies like “Cloud Native” and “Twelve Factor.”
Such frameworks build on recent developments in software engineering like containers, microservices-oriented architectures, continuous integration and deployment, and automated orchestration.
TWELVE FACTOR
I. Codebase
II. Dependencies
III. Config
IV. Backing services
V. Build, release, run
VI. Processes
VII. Port binding
VIII. Concurrency
IX. Disposability
X. Dev/prod parity
XI. Logs
XII. Admin processes
Synthesizing extensive experience developing and deploying apps onto their cloud PaaS, Heroku constructed a framework for building modern applications consisting of 12 development guidelines, conceived to increase developers’ productivity and improve the maintainability of applications. As PaaS providers abstract away all layers beneath the application, it is important to adapt the packaging, monitoring, and scaling of apps to this new level of abstraction. The Twelve Factors allow a move towards declarative, self-contained, and disposable services. When effectively leveraged, they form a unified methodology for building and maintaining apps that are both scalable and easily deployable, fully utilizing managed cloud infrastructure.
CLOUD NATIVE
Cloud Native apps are containerized, segmented into microservices, and are designed to be dynamically deployed and efficiently run by orchestration systems like Kubernetes.
What makes an application Cloud Native?
To effectively deploy, run, and manage Cloud Native apps, the application must implement several Cloud Native best practices. For example, a Cloud Native app should:
· Expose a health check endpoint so that container orchestration systems can probe application state and react accordingly
· Continuously publish logging and telemetry data, to be stored and analyzed by systems like Elasticsearch and Prometheus for logs and metrics, respectively
· Degrade gracefully and cleanly handle failure so that orchestrators can recover by restarting or replacing it with a fresh copy
· Not require human intervention to start and run
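The first of these practices, the health-check contract, can be sketched in shell. Here 'check_health' is a hypothetical stand-in for an HTTP GET against the app's health endpoint (e.g. /healthz), and the branch stands in for the orchestrator's reaction; none of these names come from the original text.

```shell
# Hypothetical liveness-probe sketch: check_health stands in for an HTTP GET
# against the app's health endpoint; the branch stands in for the orchestrator.
check_health() { [ -f /tmp/app_healthy ]; }

touch /tmp/app_healthy             # the app reports itself healthy
if check_health; then
  echo "healthy: keep routing traffic"
else
  echo "unhealthy: restart or replace the container"
fi
```

In a real cluster the orchestrator, not a shell loop, performs this probe on a schedule and acts on the result.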
Cloud Native Computing Foundation (CNCF) was created under the umbrella of the Linux Foundation to foster the growth and development of high-quality projects like Kubernetes. Examples of other CNCF projects include Prometheus, a monitoring system and time-series database often rolled out alongside Kubernetes; and FluentD, a data and log collector often used to implement distributed logging in large clusters.
In its current charter, the CNCF defines three core properties that underpin Cloud Native applications:
§ Packaging applications into containers: “containerizing”
§ Dynamic scheduling of these containers: “container orchestration”
§ Software architectures that consist of several smaller loosely-coupled and independently deployable services: “microservices”
2 MICROSERVICES
Microservices is a software architecture style that advocates for many granular services that each perform a single business function. Each microservice is a self-contained, independently deployable piece of a larger application that interacts with other components, typically via well-defined REST APIs.
Monolithic vs. Microservices Architectures
Microservices provide several advantages over large, multi-tier software monoliths:
· They can be scaled individually and on demand
· They can be developed, built, tested and deployed independently
· Each service team can use its own set of tools and languages to implement features, and grow at its own rate
· Any individual microservice can treat others as black boxes, motivating strongly communicated and well-defined contracts between service teams
· Each microservice can use its own data store which frees teams from a single overarching database schema
Large, multi-tier software monoliths must be scaled as a cohesive whole, slowing down development cycles. Microservices enable faster development iteration, and increased flexibility in running applications on cloud infrastructure.
3 CONTAINERS
What are Containers?
Containers are a way of packaging applications with all of their required dependencies and libraries in a portable and easily deployable format. Once launched, these packages provide a consistent and predictable runtime environment for the containerized application. Taking advantage of Linux kernel isolation features such as cgroups and namespaces, container implementations — or runtimes — provide a sandboxed and resource-controlled running environment for applications.
Containers vs. Virtual Machines
Compared to virtual machines, containers are more lightweight and require fewer resources because they encapsulate fewer layers of the operating system stack. Both provide resource-limited environments for applications and all their software dependencies to run, but since containers share the host’s OS kernel and do not require separate operating systems, they boot in a fraction of the time and are much smaller in size.
Container Runtimes
Docker is the most mature, widely supported, and common container format, embedded into most container orchestration systems. The Open Container Initiative (OCI), a Linux Foundation project, has worked to standardize container formats and runtimes.
Containerizing an application using Docker first involves writing a container image manifest called a Dockerfile. This file describes how to build a container image by defining the starting source image and then outlining the steps required to install any dependencies (such as the language runtime and libraries), copy in the application code, and configure the environment of the resulting image.
Developers or build servers then use the Docker container runtime to build these dependencies, libraries, and application sources into a binary package called a Docker image. Docker images are built in ordered layers, are composable, and can be reused as bases for new images. Once built, these images can be used to start containers on any host with a Docker container runtime
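The Dockerfile steps described above can be sketched as follows. This is a minimal, hypothetical example for an imagined Node.js app; the base image, file names, and build tag are illustrative placeholders, not taken from the original text.

```shell
# Hypothetical sketch: write a minimal Dockerfile for an imagined Node.js app.
cat > Dockerfile <<'EOF'
# starting source image
FROM node:14-alpine
WORKDIR /app
# install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install
# copy in the application code
COPY . .
# configure the environment of the resulting image
ENV NODE_ENV=production
EXPOSE 8080
CMD ["node", "server.js"]
EOF

# Building the image requires a Docker runtime; shown for illustration only:
#   docker build -t my-app:1.0 .
```

Each instruction above produces one image layer, which is what makes images composable and reusable as bases for new images.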
A given Docker container:
· Implements some narrow piece of business or support logic
· Explicitly declares all of its software dependencies in a Dockerfile
· Is extremely portable across cloud providers, as long as it has the requisite resources and a Docker runtime
· Deploys quickly to replace a failed running container of the same type
· Replicates easily to accommodate the additional load on a heavily requested business function by launching additional container instances
Once your team’s application has been neatly packaged into a set of microservice containers, each performing some unit of business functionality, you should consider the following questions:
· How do you then deploy and manage all of these running containers?
· How do these containers communicate with one another, and what happens if a given container fails and becomes unresponsive?
· If one of your microservices begins experiencing heavy load, how do you scale the number of running containers in response, assigning them to hosts with resources available?
4 CLUSTERS
Container orchestration systems were designed to reduce some of this operations overhead by abstracting away the underlying infrastructure and automating the deployment and scaling of containerized applications. Systems such as Kubernetes, Marathon (running on Apache Mesos), and Docker Swarm simplify the task of deploying and managing fleets of running containers by implementing some or all of the following core functionality:
· Container Scheduling
· Load Balancing
· Service Discovery
· Cluster Networking
· Health Checking and State Management
· Autoscaling
· Rolling Deployments
· Declarative Configuration
Container Scheduling
A scheduler manages allocating the desired resources (like CPU and memory) and assigns the containers to cluster member nodes with these resources available.
Load Balancing
Load balancers manage the distribution of requests from both internal and external sources.
Service Discovery
Service discovery exposes apps to one another and external clients in a clean and organized fashion using either DNS or some other mechanism, such as local environment variables.
Cluster Networking
Clusters also need to connect running applications and containers to one another across machines, managing the assignment of IP and network addresses to cluster members and containers.
Health Checking and State Management
A health check endpoint allows orchestrators to reliably check the state of running applications and only direct traffic towards those that are healthy. Using this endpoint, orchestrators also repeatedly probe running apps and containers for liveness and self-heal by restarting those that are unresponsive.
Autoscaling
Container orchestrators handle scaling applications by monitoring standard metrics such as CPU or memory use, as well as user-defined telemetry data. The orchestrator then increases or decreases the number of running containers accordingly. Some orchestration systems also provide features for scaling the cluster and adding additional cluster members should the number of scheduled containers exceed the amount of available resources.
Rolling Deployments
Container orchestration systems also implement functionality to perform zero-downtime deploys. Systems can roll out a newer version of an application container incrementally, deploying a container at a time, monitoring its health using the probing features described above, and then killing the old one. They also can perform blue-green deploys, where two versions of the application run simultaneously and traffic is cut over to the new version once it has stabilized. This also allows for quick and painless rollbacks, as well as pausing and resuming deployments as they are carried out.
Declarative Configuration
The user “declares” which desired state they would like for a given application (for example, four running containers of an NGINX web server), and the system takes care of achieving that state by launching containers on the appropriate members, or killing running containers. This declarative model enables the review, testing, and version control of deployment and infrastructure changes.
Rolling out cluster software to manage your applications often comes with the cost of provisioning, configuring, and maintaining the cluster. Managed container services like DigitalOcean Kubernetes can minimize this cost by operating the cluster Control Plane and simplifying common cluster administration tasks like scaling machines and performing cluster-wide upgrades.
Kubernetes and its expanding ecosystem of Cloud Native projects have become the platform of choice for managing and scheduling containers.
5 KUBERNETES
The Kubernetes container orchestration system was born and initially designed at Google by several engineers who architected and developed Google’s internal cluster manager Borg.
KUBERNETES DESIGN OVERVIEW
Kubernetes is a container orchestration system: a dynamic system that manages the deployment, management, and interconnection of containers on a fleet of worker servers. The worker servers where containers run are called Nodes, and the servers that oversee and manage these running containers are called the Kubernetes Control Plane.
Containers and Pods
It’s important to note here that the smallest deployable unit in a Kubernetes cluster is not a container but a Pod. A Pod typically consists of an application container (like a Dockerized Express/Node.js web app), or an app container and any “sidecar” containers that perform some helper function like monitoring or logging. Containers in a Pod share storage resources, a network namespace, and port space. A Pod can be thought of as a group of containers that work together to perform a given function. They allow developers to ensure that these sets of containers are always scheduled onto Nodes together.
Scaling, Updating and Rolling Back: Kubernetes Deployments
Pods are typically rolled out using Deployments, which are objects defined by YAML files that declare a particular desired state. For example, an application state could be running three replicas of the Express/Node.js web app container and exposing Pod port 8080. Once created, a controller on the Control Plane gradually brings the actual state of the cluster to match the desired state declared in the Deployment by scheduling containers onto Nodes as required. Using Deployments, a service owner can easily scale a set of Pod replicas horizontally or perform a zero-downtime rolling update to a new container image version by simply editing a YAML file and performing an API call (e.g. by using the command line client kubectl). Deployments can quickly be rolled back, paused, and resumed.
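A Deployment matching the example in the text (three replicas of an Express/Node.js container exposing port 8080) could be declared like this. The object and label names and the image tag are hypothetical placeholders.

```shell
# Hypothetical Deployment manifest matching the three-replica example.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: express-web
  template:
    metadata:
      labels:
        app: express-web
    spec:
      containers:
      - name: express-web
        image: example/express-app:1.0
        ports:
        - containerPort: 8080
EOF

# Apply, scale, and roll back with kubectl (requires a cluster; illustrative):
#   kubectl apply -f deployment.yaml
#   kubectl scale deployment/express-web --replicas=5
#   kubectl rollout undo deployment/express-web
```

Editing `replicas` and re-applying the file is all that horizontal scaling requires; the Control Plane converges the cluster to the new desired state.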
Exposing your Application: Kubernetes Services
Once deployed, a Service can be created to allow groups of similar deployed Pods to receive traffic (Services can also be created simultaneously with Deployments). Services are used to grant a set of Pod replicas a static IP and configure load balancing between them using either cloud provider load balancers or user-specified custom load balancers. Services also allow users to leverage cloud provider firewalls to lock down external access.
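A Service granting such a set of Pod replicas a stable address behind a cloud load balancer could be sketched as below; the names, labels, and port numbers are assumed for illustration.

```shell
# Hypothetical Service manifest exposing the Pods behind a cloud load balancer.
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: express-web
spec:
  type: LoadBalancer
  selector:
    app: express-web
  ports:
  - port: 80
    targetPort: 8080
EOF

# Create the Service (requires a cluster; illustrative):
#   kubectl apply -f service.yaml
```

On DigitalOcean Kubernetes, a `type: LoadBalancer` Service like this would provision a DigitalOcean Load Balancer automatically.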
Pod Management: Kubernetes Node Agents
To start and manage Pods and their containers on worker machines, Nodes run an agent process called kubelet, which communicates with the kube-apiserver on the Control Plane. Using a container runtime like Docker, also running on Nodes, these scheduled containers are first pulled as images from either a private or public image registry, and then created and launched. Nodes also run kube-proxy, which manages network rules on the host.
Control and Schedule: Kubernetes Control Plane
The Kubernetes Control Plane oversees the Nodes, managing their scheduling and maintaining their workloads. It consists of the kube-apiserver front end, backed by the key-value store etcd, which stores all the cluster data. Finally, a kube-scheduler schedules Pods to Nodes, and a set of controllers continuously observes the state of the cluster and drives its actual state towards the desired state.
Software teams can spend more time building applications and less time managing integration, deployment, and the infrastructure that apps run on.
6 DIGITALOCEAN KUBERNETES
Rolling your own production-ready Kubernetes cluster often involves several time-consuming and costly steps: provisioning the underlying compute, storage, and networking infrastructure for the Control Plane and Nodes; bootstrapping and configuring Kubernetes components like the etcd cluster and Pod networking; and thoroughly testing the cluster for resiliency towards infrastructure failures.
Self-hosted Kubernetes clusters must be managed and monitored by DevOps teams, while routine maintenance tasks like upgrading the cluster or the underlying infrastructure require manual intervention by engineers.
Simple, Flexible Scaling
Regardless of the size or number of running applications, cluster creation and operation remains simple via a streamlined web interface and REST API, allowing developers to quickly launch and scale a managed Kubernetes cluster.
Minimizing Day 2 Cost: Managed Operations and Maintenance
Automated cluster upgrades and Control Plane backup and recovery further reduce operations and day-to-day management overhead. DigitalOcean Kubernetes clusters self-heal automatically — Control Plane and Node health are continuously monitored, and recovery and Pod rescheduling occurs in the background, preventing unnecessary and disruptive off-hours alerts.
Kubernetes in the DigitalOcean Cloud
DigitalOcean Kubernetes integrates seamlessly with other DigitalOcean infrastructure products, bringing in all the cloud primitives needed to handle scaling and securing your team’s applications. When creating a Service to expose your app, DigitalOcean Kubernetes can automatically provision a Load Balancer and route traffic to the appropriate Pods. Additionally, you’ll be able to set up a DigitalOcean Firewall to lock down and restrict web traffic to your running applications. Finally, DigitalOcean Block Storage Volumes can be used as PersistentVolumes to provide non-ephemeral and highly available shared storage between containers.
Harness the Kubernetes Ecosystem
Developers have complete control over workload deployment, scaling, and monitoring. Container images can be pulled directly from public and private registries like Docker Hub and Quay.io, granting teams complete flexibility in designing and implementing continuous integration and deployment pipelines. With the full Kubernetes API exposed, developers can also benefit from the rich ecosystem of third-party Kubernetes tools.
Simple, Transparent Pricing
Software teams can maximize their cloud resource utilization and accurately forecast spend with transparent, predictable pricing. Teams pay only for running Nodes, and their bandwidth is pooled at the account level, making DigitalOcean Kubernetes a market-leading price-to-performance container platform.
DigitalOcean Kubernetes reduces time to market and facilitates the scaling of products and iteration of features.
CHAPTER: CONVERT DIGITALOCEAN DROPLET TO VMWARE VM
This guide explains a method for converting a DigitalOcean Droplet to a VMDK which can be used under the VMware ESXi hypervisor or other virtualization software. This process is one-way: it is currently impossible to convert a VMDK to a DigitalOcean Droplet.
Requirements:
1. Root access to the DigitalOcean Droplet.
2. Password for root on the DigitalOcean Droplet.
3. Destination Storage Location with SSH access enabled.
4. Destination Storage Location with 'qemu-utils' installed.
5. A favorite live Linux distribution (this procedure uses Ubuntu Desktop), booted from either an existing Linux VM or a live-boot-capable ISO.
Procedure:
1. Log into the DigitalOcean Droplet.
2. Prepare the Droplet for Backup.
a. Reset root's password if you do not already know it.
b. Change to runlevel 1 if possible and enter root's password to enter maintenance mode. Otherwise skip to step 3.
i. Stop the rsyslog, udev, and dbus daemons.
initctl --system stop rsyslog
initctl --system stop udev
initctl --system stop dbus
It is normal to receive an error from initctl when stopping dbus. The above examples assume the Droplet is using upstart. Debian snapshots may not be using upstart.
3. Use DD to zero out any deleted data on the partition, so that the compressed backup is smaller.
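The zero-fill trick in step 3 can be sketched as below. On the real Droplet the file would be written to the root filesystem until dd exhausts free space; the paths here are demo-only placeholders, and the demo bounds the write so it is safe to run anywhere.

```shell
# Step 3 sketch. On the real Droplet (paths are placeholders):
#   dd if=/dev/zero of=/zero.fill bs=1M
#   rm -f /zero.fill
# Bounded demo of the same trick on a scratch directory:
mkdir -p /tmp/zerodemo
dd if=/dev/zero of=/tmp/zerodemo/zero.fill bs=1M count=4 2>/dev/null
rm -f /tmp/zerodemo/zero.fill
echo "freed blocks now hold zeros, so gzip can compress them away"
```

Deleted data normally remains on disk as stale bytes; overwriting free space with zeros makes those regions highly compressible in step 4.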
4. Use DD to byte-copy the DigitalOcean partition, feeding it into gzip, and then transfer it over SSH to the Storage Location.
5. Extract the gzipped image.
At this point, you could manually mount the dd image file with 'losetup /dev/loop[0-7] /storage/location/snapshot.image' and then mount /dev/loop[0-7] to the filesystem. This is useful for extracting files from the image or modifying it in any way before converting it to a VMDK.
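Steps 4 and 5 can be sketched as one pipeline. On the real Droplet the input would be the block device and the output would stream over SSH; the device name, host, and paths below are assumed placeholders, and the runnable demo uses a small scratch file instead.

```shell
# Steps 4-5 sketch. On the real Droplet (device, host, and paths are placeholders):
#   dd if=/dev/vda1 bs=1M | gzip -c | ssh user@storage 'cat > /storage/location/snapshot.image.gz'
#   gunzip /storage/location/snapshot.image.gz
# Bounded demo of the same pipeline on a small scratch file:
dd if=/dev/zero of=/tmp/disk.img bs=1M count=2 2>/dev/null
dd if=/tmp/disk.img bs=1M 2>/dev/null | gzip -c > /tmp/snapshot.image.gz
gunzip -f /tmp/snapshot.image.gz            # leaves /tmp/snapshot.image
cmp -s /tmp/disk.img /tmp/snapshot.image && echo "byte-for-byte copy verified"
```

Compressing on the Droplet side before the SSH transfer is what makes the step 3 zero-fill pay off: the zeroed free space shrinks to almost nothing on the wire.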
6. Convert the DD image to a Virtual Machine Disk (VMDK) with the 'qemu-img' utility.
The purpose of converting this to a VMDK is to make it easily accessible from a VM. You can skip converting this to a VMDK and mount the DD image directly inside a VM (from the host's filesystem) by mapping it to a loopback device.
7. Create a new Virtual Machine (VirtualBox, VMware, etc.) with a new virtual disk that is at least 2GB larger than the DD image size.
On the ESXi Shell, a new VMDK can be created with the 'vmkfstools' utility to avoid creating a new Virtual Machine. Assign the newly created VMDK to an existing Virtual Machine instead.
8. With both the Converted DD Image VMDK and the Freshly Created VMDK attached to a VM, boot to a Linux environment (verified with the Ubuntu 13.10 Desktop live environment) to transfer the partition.
For the purposes of this procedure, we will assume the following:
a. /dev/sda Is the Converted DD Image VMDK
b. /dev/sdb Is the Freshly Created VMDK which is at least 2GB bigger than /dev/sda
c. We are using Ubuntu for the partition transfer.
9. Partition /dev/sdb
a. /dev/sdb1 Maximum Partition Size - ext4
10. Using DD, transfer byte-for-byte /dev/sda to /dev/sdb1.
dd if=/dev/sda of=/dev/sdb1 conv=notrunc,noerror
11. Disconnect the Converted DD Image VMDK from the VM (/dev/sda)
12. Using FSCK, check the integrity of the newly copied partition.
fsck -t ext4 /dev/sdb1
13. Mount /dev/sdb1 to delete any remnant GRUB files from DigitalOcean. Unmount when finished.
14. Install GRUB to /dev/sdb to make the new VMDK bootable.
15. Reboot the Virtual Machine to the new VMDK; at this point, the Droplet should boot!
For additional cleanup to make booting a little more sane, perform the following:
CHAPTER: DIGITALOCEAN PRICING
Basic Droplets
Droplets are virtual machines available in multiple configurations of CPU, memory and SSD.
Balanced virtual machines with a healthy amount of memory tuned to host and scale applications like blogs, web apps, testing and staging environments, in-memory caching, and databases.
Memory | vCPUs | Transfer | SSD Disk | $/HR | $/MO
1GB | 1vCPU | 1TB | 25GB | $0.007 | $5
2GB | 1vCPU | 2TB | 50GB | $0.015 | $10
2GB | 2vCPUs | 3TB | 60GB | $0.022 | $15
4GB | 2vCPUs | 4TB | 80GB | $0.030 | $20
8GB | 4vCPUs | 5TB | 160GB | $0.060 | $40
16GB | 8vCPUs | 6TB | 320GB | $0.119 | $80
General Purpose Droplets
Virtual machines with a healthy balance of memory and dedicated compute hyper-threads from best-in-class processors. Designed for the widest range of mainstream or production workloads, including web application hosting, e-commerce sites, medium-sized databases, and enterprise applications.
Memory | vCPUs | Transfer | SSD | $/HR | $/MO
8GB | 2vCPUs | 4TB | 25GB | $0.089 | $60
16GB | 4vCPUs | 5TB | 50GB | $0.179 | $120
32GB | 8vCPUs | 6TB | 100GB | $0.357 | $240
64GB | 16vCPUs | 7TB | 200GB | $0.714 | $480
128GB | 32vCPUs | 8TB | 400GB | $1.429 | $960
160GB | 40vCPUs | 9TB | 500GB | $1.786 | $1,200
CPU-Optimized Droplets
Designed for CPU-intensive applications like CI/CD, video encoding, machine learning, ad serving, batch processing, and active front-end web servers.
Memory | vCPUs | Transfer | SSD Variant | SSD | $/HR | $/MO
4GB | 2vCPUs | 4TB | 1 | 25GB | $0.060 | $40
4GB | 2vCPUs | 4TB | 2 | 50GB | $0.074 | $45
8GB | 4vCPUs | 5TB | 1 | 50GB | $0.119 | $80
8GB | 4vCPUs | 5TB | 2 | 100GB | $0.134 | $90
16GB | 8vCPUs | 6TB | 1 | 100GB | $0.238 | $160
16GB | 8vCPUs | 6TB | 2 | 200GB | $0.268 | $180
32GB | 16vCPUs | 7TB | 1 | 200GB | $0.476 | $320
32GB | 16vCPUs | 7TB | 2 | 400GB | $0.536 | $360
64GB | 32vCPUs | 9TB | 1 | 400GB | $0.952 | $640
64GB | 32vCPUs | 9TB | 2 | 800GB | $1.071 | $720
Memory-Optimized Droplets
These Droplets are built for RAM-intensive applications like high-performance databases and real-time big data processing.
Memory | vCPUs | Transfer | SSD Variant | SSD | $/HR | $/MO
16GB | 2vCPUs | 4TB | 1x | 50GB | $0.134 | $90
16GB | 2vCPUs | 4TB | 3x | 150GB | $0.164 | $110
32GB | 4vCPUs | 6TB | 1x | 100GB | $0.268 | $180
32GB | 4vCPUs | 6TB | 3x | 300GB | $0.327 | $220
64GB | 8vCPUs | 7TB | 1x | 200GB | $0.536 | $360
64GB | 8vCPUs | 7TB | 3x | 600GB | $0.655 | $440
128GB | 16vCPUs | 8TB | 1x | 400GB | $1.071 | $720
128GB | 16vCPUs | 8TB | 3x | 1200GB | $1.310 | $880
192GB | 24vCPUs | 9TB | 1x | 600GB | $1.607 | $1,080
192GB | 24vCPUs | 9TB | 3x | 1800GB | $1.964 | $1,320
256GB | 32vCPUs | 10TB | 1x | 800GB | $2.143 | $1,440
256GB | 32vCPUs | 10TB | 3x | 2400GB | $2.619 | $1,760
Block storage
Attach additional SSD-based Block Storage volumes to your Droplet to suit your database, file storage, application, service, mobile, and backup needs. Pricing starts at $0.10/GiB per month.
Backups
The pricing for backups is 20% of the cost of your virtual machine. So, for example, if you want to enable backups for a $5 per month Droplet, the cost of the backup will be $1 per month.
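The 20% rule above can be written as a small shell calculation; awk handles the fractional arithmetic. The $5 Droplet price is the example from the text.

```shell
# The 20% backup rule as arithmetic; awk handles the fractional math.
droplet_price=5                    # $/mo for the Droplet, from the example
backup_price=$(awk -v p="$droplet_price" 'BEGIN { printf "%.2f", p * 0.20 }')
echo "Backups for a \$${droplet_price}/mo Droplet cost \$${backup_price}/mo"
```

Substituting any other Droplet price gives that Droplet's backup cost the same way.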
Snapshots
Snapshots are charged at a rate of $0.05/GiB per month. Pricing is based on the size of the snapshot, not the size of the filesystem being saved. There is no additional charge for making a snapshot available in multiple regions.
Kubernetes
DigitalOcean Kubernetes provides the control plane for free, unlike other services that charge a management fee. This includes compute and storage infrastructure and management for processes like etcd, kube-apiserver, kube-controller-manager, kube-scheduler, cloud-controller-manager, and other services for Kubernetes cluster management.
Customers are billed for the underlying resources used by their Kubernetes worker nodes, which could include Droplets, Block Storage, and Load Balancers. A Kubernetes cluster can be deployed for as little as $10 per month. Only public outgoing transfer is considered for bandwidth billing. Transfer limits are calculated by pooling the transfer from all Droplets on the account. Overages above the pooled transfer are charged at a rate of $0.01/GB.
The advertised monthly price is the maximum amount per month; the actual cost depends on the number of node hours consumed within the billing period.
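The hourly-billing-with-monthly-cap rule can be sketched as below, using the $10/mo ($0.015/hr) node from the Basic Droplet table. The 200-hour figure is an invented example; only the min() logic is the point.

```shell
# Sketch of per-hour billing capped at the advertised monthly price.
hourly_rate=0.015                  # $/hr, from the $10/mo Basic Droplet row
monthly_cap=10                     # advertised monthly maximum
hours_used=200                     # hypothetical node hours this billing period
cost=$(awk -v r="$hourly_rate" -v c="$monthly_cap" -v h="$hours_used" \
  'BEGIN { t = r * h; printf "%.2f", (t < c) ? t : c }')
echo "Cost for ${hours_used} node hours: \$${cost}"
```

A node that runs the full month simply hits the cap and is billed the advertised monthly price.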
APP PLATFORM
App Platform provides a simple, intuitive, and visually rich experience to rapidly build, deploy, manage, and scale apps. We provision and manage infrastructure, operating systems, databases, application runtimes, and other dependencies. This means you can go from code to production in just minutes. App Platform has three pricing tiers: Starter, Basic, and Professional. The capabilities and pricing for each tier are listed below.
Starter
· Try App Platform and deploy static sites
· Starts at $0 / month *
· Build static sites: deploy hand-coded or pre-generated HTML, CSS, JS, and icons
· Deployment from GitHub
· Automatic HTTPS: all apps are always HTTPS-encrypted
· Bring your custom domain
· Global CDN: Cloudflare for global, high-performance content delivery
· DDoS mitigation
· Unlimited team members
· Outbound transfer: 1GiB per app
· Build minutes: 100/mo (may incur overages in the future)
· You can build and deploy 3 static sites for free; every additional static site will be charged $3/mo.
Basic
· Prototype your apps
· Starts at $5 / month
· Build static sites: deploy hand-coded or pre-generated HTML, CSS, JS, and icons
· Build and deploy dynamic apps (e.g. Node.js, Python, Go, Ruby, PHP, Docker)
· Deployment from GitHub
· Automatic HTTPS: all apps are always HTTPS-encrypted
· Bring your custom domain
· Global CDN: Cloudflare for global, high-performance content delivery
· DDoS mitigation
· Unlimited team members
· Application metrics: hourly
· CPU: shared
· Auto OS patching
· Vertical scaling: manually adjust the size of your container
· Outbound transfer: 40GiB per app
· Build minutes: 400/mo (may incur overages in the future)
Professional
· Deploy your production apps
· Starts at $12 / month
· Build static sites: deploy hand-coded or pre-generated HTML, CSS, JS, and icons
· Build and deploy dynamic apps (e.g. Node.js, Python, Go, Ruby, PHP, Docker)
· Deployment from GitHub
· Automatic HTTPS: all apps are always HTTPS-encrypted
· Bring your custom domain
· Global CDN: Cloudflare for global, high-performance content delivery
· DDoS mitigation
· Unlimited team members
· Application metrics: per-minute
· CPU: shared & dedicated
· Auto OS patching
· Vertical scaling: manually adjust the size of your container
· Horizontal scaling: scale out to meet high traffic demands
· High availability: run multiple instances of your container for redundancy
· Outbound transfer: 100 GiB per app
· Build minutes: 1000/mo
If your dynamic app has static site components, all of them will be deployed at no additional charge on the Basic and Professional tiers. This is in addition to the 3 free static sites that you get as part of the Starter tier.
PRICING FOR BASIC AND PROFESSIONAL TIERS
An app is typically made up of one or more components, like a web service, database, or workers. When you run an app, we deploy an instance (container) of each of the components. The table below shows the monthly pricing per instance.
CPUs | RAM | Basic Tier (price/mo) | Professional Tier (price/mo)
1 | 512 MiB | $5 | ✕
1 | 1 GiB | $10 | $12
1 | 2 GiB | $20 | $25
2 | 4 GiB | $40 | $50
1 Dedicated | 4 GiB | ✕ | $75
2 Dedicated | 8 GiB | ✕ | $150
4 Dedicated | 16 GiB | ✕ | $300
Add-on pricing
If you need more resources than are included in the Starter, Basic, and Professional tiers, the following prices apply.
Product | Quantity | Price/mo
Managed Databases | Varies | See below
Development Database | 1 (256 MB) | $7
Additional outbound transfer | 1 GiB | $0.10
Object storage (Spaces) | Varies | See below
APP PLATFORM FAQ
Can you provide an example of how App Platform pricing works?
An app is typically made up of one or more components, e.g. static assets and dynamic components such as web services and workers. When you run an app, we deploy an instance (a container) of each component. For example, if an app is a static site, it will be deployed for free on the Starter tier. You can build and deploy 3 static sites for free on the Starter tier; every additional static site will be charged $3/mo.
If your app is dynamic and has components such as a web service, a background worker, and a database, then you should select the Basic or Professional tier. You can then select the size of the instance (container) that will run each component.
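As a quick sanity check on the numbers above, here is a small sketch (the function names are illustrative, not part of any DigitalOcean API) that estimates a monthly App Platform bill from the Starter static-site rules plus per-instance container prices:

```python
# Illustrative sketch of App Platform pricing (names are hypothetical).
# Starter: first 3 static sites free, $3/mo each after that; dynamic
# components are billed per container instance.

FREE_STATIC_SITES = 3
EXTRA_STATIC_SITE_PRICE = 3  # $/mo per additional static site

def static_sites_cost(n_sites: int) -> int:
    """Monthly cost of n_sites static sites under the Starter rules."""
    return max(0, n_sites - FREE_STATIC_SITES) * EXTRA_STATIC_SITE_PRICE

def app_cost(n_static_sites: int, container_prices: list[int]) -> int:
    """Static-site cost plus one charge per dynamic component instance."""
    return static_sites_cost(n_static_sites) + sum(container_prices)

# 5 static sites plus one Basic 1 vCPU / 512 MiB container at $5/mo:
print(app_cost(5, [5]))  # → 11 ($6 for the 2 extra sites + $5 container)
```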
How much outbound transfer do I get with App Platform?
You get outbound transfer bandwidth for every app you deploy using App Platform.
Tier | Outbound transfer / month
Starter | 1 GiB per static site
Basic | 40 GiB per app
Professional | 100 GiB per app
Your outbound transfer is pooled at the account level. For example, if you deploy 3 static sites, 2 apps on Basic tier and 1 app on Professional tier, then the total outbound transfer for your account will be 183 GiB per month (i.e. 3 GiB from the 3 static sites, 80 GiB from the 2 apps on Basic tier and 100 GiB from the 1 app on Professional tier).
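The pooled-transfer arithmetic in this example can be sketched as follows (the helper name is illustrative; the per-app allowances come from the table above):

```python
# Sketch of the account-level pooled outbound-transfer calculation.
# Per-app monthly allowances in GiB, from the tier table above.
ALLOWANCE_GIB = {"starter": 1, "basic": 40, "professional": 100}

def pooled_transfer(apps_per_tier: dict[str, int]) -> int:
    """Total pooled outbound transfer for the account, in GiB per month."""
    return sum(ALLOWANCE_GIB[tier] * n for tier, n in apps_per_tier.items())

# 3 static sites, 2 Basic apps, and 1 Professional app:
print(pooled_transfer({"starter": 3, "basic": 2, "professional": 1}))  # → 183
```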
What happens if I exceed the usage limits?
You can build and deploy 3 static sites for free on the Starter tier; every additional static site will be charged $3/mo. We may also charge you for overages if you exceed the limits on outbound transfer.
Resource | Overage charge
Outbound transfer | $0.10 / GiB
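A minimal sketch of the overage arithmetic, assuming overages are billed at the $0.10/GiB rate above once account usage exceeds the pooled allowance:

```python
# Illustrative overage computation (assumes a flat $0.10/GiB overage rate
# applied to usage beyond the pooled allowance).
OVERAGE_PER_GIB = 0.10

def overage_charge(used_gib: float, allowance_gib: float) -> float:
    """Dollar charge for outbound transfer beyond the pooled allowance."""
    return max(0.0, used_gib - allowance_gib) * OVERAGE_PER_GIB

# 200 GiB used against the 183 GiB pooled allowance from the earlier example:
print(round(overage_charge(200, 183), 2))  # → 1.7
```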
How much does the Starter tier cost?
The Starter tier is ideal for trying App Platform and deploying static sites. You can build and deploy 3 static sites for free, and every additional static site is charged $3/month. You also get 1 GiB of outbound transfer per month for every static site. If you exceed this usage limit, we may charge you for overages.
Are there any compute resources in the Starter tier?
No. If your app needs compute resources, you will need to upgrade to the Basic or Professional tier.
Are there any free resources in the Basic and Professional tiers?
Yes. If your dynamic app has static site components (e.g. www.my_dynamic_app.com/site1, www.my_dynamic_app.com/site2), then all of them will be deployed at no additional charge on the Basic and Professional tiers. This is in addition to the 3 free static sites that you get as part of the Starter tier.
What is a development database?
A development database has 256 MB of RAM and more limited capabilities than a Managed Database: it can only be used by the app it belongs to, it is not backed up by default, it does not support multi-database creation, and if the app is destroyed, the development database is destroyed with it.
What are build minutes?
The time required to build or rebuild an application counts towards your build minutes. Every pricing tier includes a set amount of build minutes per app per month. At this time, if you use more than your included allowance, there is no additional charge.
Managed Databases
Combine the power of our core VM platform with a fully managed MySQL, Redis, or PostgreSQL database engine to give your application the performance it needs, without the operational overhead that comes with building and running your own database server.
Memory | vCPUs | Disk | Standby Nodes | $/HR | $/MO
1GB | 1vCPU | 10GB | N/A | $0.022 | $15
2GB | 1vCPU | 25GB | 0 | $0.045 | $30
2GB | 1vCPU | 25GB | 1 | $0.074 | $50
2GB | 1vCPU | 25GB | 2 | $0.104 | $70
4GB | 2vCPU | 38GB | 0 | $0.089 | $60
4GB | 2vCPU | 38GB | 1 | $0.149 | $100
4GB | 2vCPU | 38GB | 2 | $0.208 | $140
8GB | 4vCPU | 115GB | 0 | $0.179 | $120
8GB | 4vCPU | 115GB | 1 | $0.298 | $200
8GB | 4vCPU | 115GB | 2 | $0.417 | $280
16GB | 6vCPU | 270GB | 0 | $0.357 | $240
16GB | 6vCPU | 270GB | 1 | $0.595 | $400
16GB | 6vCPU | 270GB | 2 | $0.833 | $560
32GB | 8vCPU | 580GB | 0 | $0.714 | $480
32GB | 8vCPU | 580GB | 1 | $1.190 | $800
32GB | 8vCPU | 580GB | 2 | $1.667 | $1120
64GB | 16vCPU | 1.12TB | 1 | $2.381 | $1600
64GB | 16vCPU | 1.12TB | 2 | $3.333 | $2240
Spaces Object Storage
Simple and scalable S3-compatible object storage with a built-in content delivery network (CDN) to store, serve, back up, and archive any amount of web content, images, media, and static files for your web apps.
Storage | Outbound Transfer (w/CDN) | Add'l. GB Stored | Add'l. GB Transferred (w/CDN) | $/MO
250GB | 1TB | $0.02/GB | $0.01/GB | $5
TOOLS
Load balancers
Automatically distribute incoming traffic across your Droplets, enabling you to build more reliable, high-performance applications by creating redundancy. Load Balancers are billed at $0.015 per hour with no additional bandwidth charges. Available in all data center regions.
Floating IPs
Floating IPs are free to use as long as they are attached to a Droplet. Due to limited IPv4 availability, we charge $0.006 per hour for floating IPs that have been reserved but not assigned to a Droplet.
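For a rough monthly estimate of these hourly-billed tools, you can multiply by the 672-hour month that DigitalOcean's pricing calculator uses (a billing convention, assumed here; a sketch, not official pricing math):

```python
# Rough monthly estimates for the hourly-billed tools above, assuming the
# 672-hour month used by DigitalOcean's pricing calculator.
HOURS_PER_MONTH = 672

def monthly(hourly_rate: float) -> float:
    """Approximate monthly cost from an hourly rate."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

print(monthly(0.015))  # Load Balancer: 10.08
print(monthly(0.006))  # unassigned Floating IP: 4.03
```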
FAQ
What forms of payment do you accept?
We accept Visa, Mastercard, American Express, Discover, and PayPal. For additional payment options, including wire transfer, purchase orders, and ACH, please contact us.
When will my card be charged?
DigitalOcean billing cycles are monthly. Typically, on the first day of each month, we invoice and automatically charge your account’s primary payment method for the previous month’s usage. In some cases, we may also charge mid-cycle if your usage exceeds a threshold.
Am I charged when I enter my credit card?
No. Your card is only charged at the end of the billing cycle or upon exceeding a usage threshold. Pre-authorization charge: when you add a card, we may send a pre-authorization request to the issuing bank to verify that the card was issued by that bank and that future charges will be authorized. These temporary pre-authorizations are typically $1, can vary, and are immediately canceled by us.
Why am I billed for powered off Droplets?
When you power off your Droplet, you are still billed for it, because your disk space, CPU, RAM, and IP address remain reserved even while it is powered off. Charges therefore accrue until you destroy the Droplet.
I got a $100 credit when I opened an account. When will my card be charged?
Your card will be charged only after you have used up the free credit.
What’s the price for the Marketplace 1-Click Apps?
We charge you for the underlying compute on which the 1-Click App runs.
How do I remove my card from the account?
To remove a credit or debit card, click the "..." menu of the card, then click Delete. In the Confirm Delete Card window that opens, click Delete to remove the card. You cannot remove a card if it is the default payment method for your account or if it’s the only card left on your account.
Can I have a refund?
We do not offer refunds. If there are extenuating circumstances, contact support.
Can I prepay for my resources?
Yes, you can make pre-payments with PayPal. Pre-payments let you pay ahead of time for future resource usage. When your account balance is due, we apply pre-payments before we charge any credit or debit cards.
How do you calculate costs in the pricing calculator?
Our pricing calculator uses 672 hours (the number of hours in a 28-day month) to calculate the monthly cost for each provider. On-demand prices are shown.
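The same 672-hour month also explains the relationship between the hourly and monthly prices in the Managed Databases table earlier in this chapter: each monthly price is approximately the hourly rate multiplied by 672, rounded to a round figure. A short sketch:

```python
# Relating the hourly and monthly prices in the Managed Databases table:
# monthly ≈ hourly × 672 (a 672-hour month), rounded to a round figure.
HOURS_PER_MONTH = 672

def approx_monthly(hourly_rate: float) -> float:
    """Monthly price implied by an hourly rate over a 672-hour month."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

print(approx_monthly(0.045))  # 30.24, listed as $30/mo
print(approx_monthly(0.298))  # 200.26, listed as $200/mo
```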
Prices are drawn from the following data centers:
· DigitalOcean: prices consistent throughout all data centers
· GCP: US East (Iowa)
· AWS: US East (Ohio)
· Azure: US East (no state specified)
The configurations we’ve used from other providers for this comparison are as follows:
· AWS – t3.micro, t3.small, t3.medium, t3.large, t3.xlarge, t3.2xlarge, m5.4xlarge, m5.4xlarge, m5.12xlarge, m5.12xlarge, m5.12xlarge, c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge
· GCP – f1-micro, g1-micro, n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highcpu-4, n1-highcpu-8, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64
· Azure – B1S, B1MS, B2S, B2MS, B4MS, B8MS, A8m v2, A8m v2, D32 v3, D32 v3, F2s v2, F4s v2, F8s v2, F16s v2, F32s v2
The pricing calculator does not account for all available discounts, and it does not include backups or snapshots, which incur an additional fee. DigitalOcean acknowledges that there is some variability and complexity in other providers’ pricing.
INDEX
Cloud Native, 8
Cloud Native Computing Foundation (CNCF), 9
container orchestration, 9
containerizing, 9
Containers, 10
Deployments, 15
DigitalOcean, 7
Docker, 10
Docker image, 11
Dockerfile, 11
Microservices, 9
Open Container Initiative (OCI), 10
Pod, 14
scheduler, 13
REFERENCES
Jetha, H., & Tagliaferri, L. (2018). Running Cloud Native Applications on DigitalOcean Kubernetes (white paper). DigitalOcean, Inc. https://www.digitalocean.com/