Connect Your Own Infrastructure
Learn how to wire up your first infrastructure and build a first version of your Internal Developer Platform in 30 minutes.
In this tutorial, you will learn how to deploy a simple application to your own infrastructure using Humanitec. To keep things well organized and separated from your existing infrastructure, we will also explain how to create the required infrastructure from scratch. Throughout this tutorial, you will need to choose between GCP, AWS, and Azure; we provide instructions for each of these public cloud platforms. We will create, connect, and use the following infrastructure in this tutorial:
  • A managed Kubernetes cluster
  • A managed database server
  • GitHub repositories and CI pipelines using GitHub Actions
Rather than setting up a dedicated DNS service, we will use the default DNS offered with Humanitec. You can refer to this section to learn how to use your own DNS service with Humanitec. Do not forget to delete the created infrastructure after finishing this tutorial so that you don't pay for it beyond the tutorial itself.
If you do not have a Humanitec account yet, head over to our website to request a free trial. We will send you instructions on how to start your free trial right away.

The Sample App

We will use the same sample application as in the Create Sample App tutorial. It consists of two services: sample-app, a frontend serving a React app, and sample-service, a backend that stores data in a PostgreSQL database.
Architecture of sample app provided by Humanitec
We will explain how to clone the respective repositories later in this tutorial.

Create Kubernetes Cluster and Registry

We will deploy the sample application to a managed Kubernetes cluster. So the first step is to create a new Kubernetes cluster and a new container registry that we can then connect to Humanitec. We assume that you already have an account with the public cloud provider of your choice (i.e., GCP, AWS, or Azure). So here is how you create the new Kubernetes cluster and container registry:
GCP
AWS
Azure
  • Create a new project in your existing GCP account. This helps to keep all newly created infrastructure separated from the existing infrastructure in your account.
  • Select Kubernetes Engine within the Google Cloud Console. Enable the Kubernetes Engine API for the new project.
  • Create a new Kubernetes cluster. Make sure to create a GKE Standard cluster. You can go with the defaults or create a cost-optimized cluster. Remember that you will have to pay for the infrastructure you are creating. However, costs should be minimal if you delete the cluster a few days after finishing this tutorial.
  • Connect to your newly created Kubernetes cluster via the Cloud Shell and install an NGINX Ingress Controller following the steps provided here.
  • Get the external IP of your NGINX load balancer by running kubectl get services -n ingress-nginx in the Cloud Shell and note it down.
  • You now need to create a service account with the Kubernetes Engine Admin, Storage Admin, and Cloud SQL Client roles, then create and download a service account key in JSON format. We will need this JSON file later on. The Storage Admin role allows pushing the first image to your GCR (Google Container Registry), and the Cloud SQL Client role allows Humanitec to create new databases within the database instance we are going to create later.
  • You do not have to explicitly create a Container Registry. Instead, it will be automatically created once you push your first image later in this tutorial.
  • Select APIs & Services within the Google Cloud Console and enable the following two APIs: Stackdriver API and Cloud Resource Manager API. Humanitec needs these APIs to run and monitor the deployments.
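If you prefer the command line, the cluster and service-account steps above can also be sketched with the gcloud CLI. This is a sketch, not the tutorial's canonical path: the cluster name, zone, node count, and service-account ID below are placeholder choices; adjust them to your project.

```shell
# Create a GKE Standard cluster (name and zone are placeholders)
gcloud container clusters create my-k8s-cluster \
  --zone europe-west3-a \
  --num-nodes 2

# Create the service account Humanitec will use
gcloud iam service-accounts create humanitec-sa

# Grant the three roles described above
PROJECT_ID=$(gcloud config get-value project)
SA="humanitec-sa@${PROJECT_ID}.iam.gserviceaccount.com"
for ROLE in roles/container.admin roles/storage.admin roles/cloudsql.client; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:${SA}" --role="$ROLE"
done

# Create and download the service account key we will need later
gcloud iam service-accounts keys create key.json --iam-account="$SA"
```

The downloaded key.json is the "Provider's credentials" JSON you will paste into Humanitec when connecting the cluster.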
First, you need to create the EKS Cluster.
  • You can create your new EKS cluster via the AWS Cloud Console. However, we recommend using eksctl as described in Getting started with Amazon EKS - eksctl. The documentation also has a section on how to Delete cluster and nodes, which is handy after finishing this tutorial. Make sure to create the EKS cluster with Managed nodes - Linux.
  • Once created, you should be connected to the newly created cluster. You can run kubectl get nodes to see the new nodes you just created.
  • You need to create a new Policy to allow Humanitec to access your cluster and run deployments. To do this, go to Identity and Access Management (IAM) within the AWS Cloud Console and then to Policies. Create a new policy with the following permissions.
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "eks:DescribeNodegroup",
                "eks:ListNodegroups",
                "eks:AccessKubernetesApi",
                "eks:DescribeCluster",
                "eks:ListClusters"
            ],
            "Resource": "*"
        }
    ]
}
```
  • You then need to create a new User and attach the policy created in the last step to that new user. To allow the newly created user to also access the private container registries we will create later, additionally attach the policy AmazonEC2ContainerRegistryFullAccess. Make sure to also create an Access Key for the new user, since Humanitec will later need both the access key ID and the secret access key. You can download both in a CSV file. Create a JSON object from this file in the following format:
```json
{
    "aws_access_key_id": "AAABBBCCCDDDEEEFFFGGG",
    "aws_secret_access_key": "zZxXyY123456789aAbBcCdD"
}
```
  • You now need to map the IAM user into the newly created cluster. You can do so via kubectl edit configmap aws-auth -n kube-system. The following configuration needs to be added for the user (you can copy the userarn from the AWS Cloud Console; make sure to also exchange <username> with the name of the user you just created):
```yaml
mapUsers: |
  - userarn: arn:aws:iam::XXXXXXXXXXXX:user/<username>
    username: <username>
    groups:
      - system:masters
```
  • Install an NGINX Ingress Controller following the steps provided here.
  • Get the external IP of your NGINX load balancer by running kubectl get services -n ingress-nginx and note it down.
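The access-key CSV mentioned above can be turned into the required JSON object with a few lines of Python. This is a convenience sketch; the column names assume the standard accessKeys.csv layout from the AWS Console, so adjust them if your export differs.

```python
import csv
import json


def credentials_json(csv_path: str) -> str:
    """Read the first row of an AWS access-key CSV and return the JSON
    object Humanitec expects (see the format shown above)."""
    with open(csv_path, newline="") as f:
        # Assumed header: "Access key ID,Secret access key"
        row = next(csv.DictReader(f))
    return json.dumps(
        {
            "aws_access_key_id": row["Access key ID"],
            "aws_secret_access_key": row["Secret access key"],
        },
        indent=2,
    )
```

Run it against the downloaded file and paste the printed JSON into Humanitec later when connecting the cluster.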
Second, you need to create the Elastic Container Registry.
  • In your AWS Cloud Console, navigate to Elastic Container Registry (ECR).
  • Click on Get Started to create a new ECR.
  • You can leave the Visibility settings on Private. Enter sample-app as the repository name and hit Create repository. Repeat the same process to create a second repository called sample-service.
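For reference, the cluster and registry creation can also be done from the command line. This sketch assumes eksctl and the AWS CLI are installed and configured; the cluster name and region are placeholders.

```shell
# Create the EKS cluster with managed Linux nodes (name/region are placeholders)
eksctl create cluster --name my-k8s-cluster --region us-west-2 --managed

# Verify that kubectl now points at the new cluster
kubectl get nodes

# Create the two private ECR repositories for the sample application
aws ecr create-repository --repository-name sample-app
aws ecr create-repository --repository-name sample-service
```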
Tutorial work in progress; will be published soon. Check out this section to learn how to connect your own AKS Cluster.

Connect Kubernetes Cluster and Registry

After creating the Kubernetes cluster and the container registry, we now need to connect them to Humanitec to make them available for all users/developers. Here is how this is done:
GCP
AWS
Azure
  • Go to Resources management in Humanitec.
  • Select Kubernetes Cluster to connect the newly created Kubernetes cluster.
  • Give the Kubernetes cluster the ID my-k8s-cluster and select k8s-cluster-gke as Driver. You now need to fill in the details. These are:
    • The Cluster Name which you can find under Cluster basics on the cluster overview page for your newly created Kubernetes cluster within the Google Cloud Console (also note down the Control plane zone while you are on the page).
    • The external Load Balancer IP address which you noted down earlier.
    • The GCP Project ID which you can find on the Home Dashboard of the GCP project you created earlier.
    • The GCP Zone of your Kubernetes cluster which you noted down earlier.
    • The Provider's credentials as JSON object which is the content of the service account key JSON file you created earlier.
  • You don't need to connect the Google Container Registry to Humanitec. The registry is automatically connected to the Kubernetes cluster if you are using the same Google project (which we did when creating the infrastructure).
  • Assign the newly connected Kubernetes cluster to the Environment Type development within your organization. You can do so by selecting Manage Matching next to the newly created my-k8s-cluster.
  • Go to Resources management in Humanitec.
  • Select Kubernetes Cluster to connect the newly created Kubernetes cluster.
  • Give the Kubernetes cluster the ID my-k8s-cluster and select k8s-cluster-eks as Driver. You now need to fill in the details. These are:
    • The Provider's credentials as JSON object, which is the JSON object with the access key ID and secret access key you created earlier.
    • The external Load Balancer IP address which you noted down earlier.
    • The Load Balancer Hosted Zone which can be found in the AWS Console under the section EC2 -> Load Balancers.
    • The Cluster Name and the AWS Region of the cluster.
  • You don't need to connect the Container Registry (ECR) to Humanitec. The IAM user we created earlier can already access ECR through the AmazonEC2ContainerRegistryFullAccess policy we attached.
  • Assign the newly connected Kubernetes cluster to the Environment Type development within your organization. You can do so by selecting Manage Matching next to the newly created my-k8s-cluster.
Tutorial work in progress; will be published soon. Check out this section to learn how to connect your own AKS Cluster.

Create Database

The sample-service requires a PostgreSQL database to store data. We will now create this database. In general, there are two options: (a) create the database in the same infrastructure you are using for your Kubernetes cluster, or (b) use a database-as-a-service provider like Aiven to create your database independently of the offerings of the large hyperscalers. We will explain both approaches in this section.
GCP
AWS
Azure
Aiven
  • Select SQL within the Google Cloud Console and click on CREATE INSTANCE.
  • Click Choose PostgreSQL and create a PostgreSQL instance.
  • Create a new user using the Built-in authentication and note down username as well as the password.
  • Note down the Connection name which you can find in SQL > Overview.
  • Select APIs & Services within the Google Cloud Console and enable the Cloud SQL Admin API.
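The same setup can be sketched with the gcloud CLI. The instance name, PostgreSQL version, region, tier, username, and password below are placeholders; pick values that match your cluster's region.

```shell
# Create a small PostgreSQL instance (all names and sizes are placeholders)
gcloud sql instances create my-postgres \
  --database-version=POSTGRES_13 \
  --region=europe-west3 \
  --tier=db-f1-micro

# Create a database user; note down the username and password
gcloud sql users create sample-user --instance=my-postgres --password=change-me

# Print the connection name (format: <project>:<region>:<instance>)
gcloud sql instances describe my-postgres --format='value(connectionName)'
```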
  • Select RDS within the AWS Console and click on Create database.
  • Select PostgreSQL and Dev/Test. To keep things simple and easy to debug, select Yes under Public access. Define a Master password for the Master username postgres and make sure to note it down. You can leave all other settings at their defaults.
  • You also might need to add an inbound rule in your EC2 -> Security Groups. This article provides more details. If in doubt, just check whether you can connect to the DB from your local machine.
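As a quick way to check connectivity from your local machine, you can test whether the instance's PostgreSQL port accepts TCP connections. This is a minimal sketch; the endpoint in the example comment is a placeholder, so use the Endpoint shown in the RDS console.

```python
import socket


def can_connect(host: str, port: int = 5432, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, timeouts, and refused connections
        return False


# Example (placeholder endpoint; use the Endpoint from the RDS console):
# can_connect("mydb.xxxxxxxxxx.us-west-2.rds.amazonaws.com")
```

If this returns False, revisit the Public access setting and the inbound rule on the security group mentioned above.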
Tutorial work in progress; will be published soon. Check out this section to learn how to connect your own Azure database.
Instead of using the managed database offerings of the hyperscaler of your choice, you can also use DB-as-a-service providers like Aiven to create and manage your databases. Here is how you create a PostgreSQL database in Aiven.
  • Log into the Aiven Console and select Create a new service.
  • Select PostgreSQL and the Cloud Provider and region that fits your Kubernetes cluster best. Hint: it is typically best to select the Service Plan called Hobbyist if you don't plan to use the database in a work setup later on.
  • Hit Create service once you are done.
  • Note down the following information:
    • Your Authentication token, which you can find at https://console.aiven.io/profile/auth.
    • The Database Name, Host, Port, User, and Password from the overview page of your newly created database.

Connect Database

After creating the database, we now need to connect it to Humanitec to make it available for all users/developers. Here is how this is done:
GCP
AWS
Azure
Aiven
  • Go to Organization settings in Humanitec.
  • Select the tab Accounts and click on Google Cloud Platform. Enter the following information:
    • my-gcp-account as Account name.
    • The Service account access key (JSON), which is the content of the service account key JSON file you created earlier.
  • Go to Resources management in Humanitec.
  • Click on Postgres to connect your newly created PostgreSQL database and enter the following information:
    • Define the ID as my-cloudsql-db.
    • Select postgres-cloudsql as driver.
    • Select the my-gcp-account as Credentials.
    • The Fully qualified CloudSQL instance name you noted down earlier.
    • The Password and User / Role you noted down earlier.
  • Click Create to create the new resource within Humanitec.
  • Assign the newly connected database to the Environment Type development within your organization. You can do so by selecting Manage Matching next to the newly created my-cloudsql-db.
  • Go to Organization settings in Humanitec.
  • Select the tab Accounts and click on Amazon Web Services. Enter the following information:
    • my-aws-account as Account name.
    • The Access key id and the Secret access key you downloaded earlier.
  • Go to Resources management in Humanitec.
  • Click on Postgres to connect your newly created PostgreSQL database and enter the following information:
    • Define the ID as my-cloudsql-db.
    • Select postgres as driver.
    • Select the my-aws-account as Credentials.
    • The Password and User (postgres if you have not changed the default during the creation process) you defined earlier.
    • The Host (which is called Endpoint in the AWS Console) and the Port (typically 5432).
  • Click Create to create the new resource within Humanitec.
  • Assign the newly connected database to the Environment Type development within your organization. You can do so by selecting Manage Matching next to the newly created my-cloudsql-db.
Tutorial work in progress; will be published soon. Check out this section to learn how to connect your own Azure database.
  • Go to Organization settings in Humanitec.
  • Select the tab Accounts and click on Aiven. Enter the following information:
    • my-aiven-account as Account name.
    • The Aiven Authentication token which you noted down earlier as Token.
  • Go to Resources management in Humanitec.
  • Click on Postgres to connect your newly created PostgreSQL database and enter the following information:
    • Define the ID as my-cloudsql-db.
    • Select postgres as driver.
    • Select the Configure without account as Credentials.
    • Enter the Hostname, Database name, Port, User, and Password you noted down earlier.
  • Click Create to create the new dynamic resource within Humanitec.
  • Assign the newly connected database to the Environment Type development within your organization. You can do so by selecting Manage Matching next to the newly created my-cloudsql-db.

Clone Repositories

In order to build and push the two images required for the sample application, you will need to fork the repositories for the sample-app and the sample-service. We will use GitHub and GitHub Actions in this tutorial, but you should be able to use the Git and CI solution of your choice and follow the same general steps.
  • Fork the sample-app repository into your GitHub account.
  • Fork the sample-service repository into your GitHub account.

Create CI Pipelines

We will use GitHub Actions to build the images, push them to the container registry, and notify Humanitec. The first thing we need to do is create an API token within Humanitec to allow our CI pipelines to connect to our Humanitec Organization:
  • Go to the Organization settings in Humanitec by clicking on the account icon top right and select Organization settings from the drop-down.
  • Select the tab API tokens and enter pipeline-token as Token ID.
  • Click on Show next to the newly created API Token and note down the API token.
We are now ready to create CI pipelines for our two repositories:
GCP
AWS
Azure
Let's get started with the sample-app:
  • Create the following secrets within the scope of the repository (you can do so in Settings > Secrets):
    • PROJECT_ID: the ID of the newly created GCP project.
    • GCR_DEVOPS_SERVICE_ACCOUNT_KEY: the content of the service account key JSON file you created earlier.
    • HUMANITEC_TOKEN: the Humanitec API token you created earlier.
  • Go to your forked repository and switch to the Actions tab. Create a new workflow from scratch.
  • Add the following code snippet to build the Docker image, push it to Google Container Registry, and inform Humanitec.
```yaml
name: Sample CI Pipeline

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build-and-push:
    name: Build and push image
    runs-on: ubuntu-latest

    env:
      IMAGE_NAME: my-sample-app
      PROJECT_ID: ${{ secrets.PROJECT_ID }}
      REGISTRY_URL: eu.gcr.io # You might need to change this
      HUMANITEC_ORG: my-humanitec-org # Make sure to change this

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      # Build docker image
      - name: Build image
        run: docker build --tag $REGISTRY_URL/$PROJECT_ID/$IMAGE_NAME:$GITHUB_SHA .

      # Setup gcloud CLI
      - uses: google-github-actions/setup-gcloud@v0
        with:
          service_account_key: ${{ secrets.GCR_DEVOPS_SERVICE_ACCOUNT_KEY }}
          project_id: ${{ secrets.PROJECT_ID }}
          export_default_credentials: true

      # Configure docker to use the gcloud command-line tool as a credential helper
      - run: gcloud auth configure-docker -q

      # Push image to Google Container Registry
      - name: Push image
        run: docker push $REGISTRY_URL/$PROJECT_ID/$IMAGE_NAME:$GITHUB_SHA

      # Inform Humanitec about new image
      - name: Inform Humanitec
        run: |-
          # On push events GITHUB_REF has the form refs/heads/<branch>
          BRANCH=${GITHUB_REF#refs/heads/}
          curl \
            --request POST 'https://api.humanitec.io/orgs/'$HUMANITEC_ORG'/images/'$IMAGE_NAME'/builds' \
            --header 'Authorization: Bearer ${{ secrets.HUMANITEC_TOKEN }}' \
            --header 'Content-Type: application/json' \
            --data-raw '{
              "branch": "'$BRANCH'",
              "commit": "'$GITHUB_SHA'",
              "image": "'$REGISTRY_URL/$PROJECT_ID/$IMAGE_NAME:$GITHUB_SHA'",
              "tags": ["'$GITHUB_SHA'"]
            }'
```
  • Repeat the same steps for the sample-service. Make sure to change the IMAGE_NAME to my-sample-service in the GitHub Action.
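If you want to try the "Inform Humanitec" call outside the pipeline, the request the workflow assembles can be reproduced in a few lines of Python. This sketch only builds the URL and body (the org and image values in the example are placeholders); sending it still requires a valid API token.

```python
import json

HUMANITEC_API = "https://api.humanitec.io"


def build_notification(org: str, image_name: str, branch: str,
                       commit: str, image_ref: str) -> tuple[str, str]:
    """Return the endpoint URL and JSON body for notifying Humanitec
    of a new image build, mirroring the curl call in the workflow."""
    url = f"{HUMANITEC_API}/orgs/{org}/images/{image_name}/builds"
    body = json.dumps({
        "branch": branch,
        "commit": commit,
        "image": image_ref,
        "tags": [commit],
    })
    return url, body
```

Pass the resulting URL and body to any HTTP client with the Authorization and Content-Type headers shown in the workflow.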
Let's get started with the sample-app:
  • Create the following secrets within the scope of the repository (you can do so in Settings > Secrets):
    • REPO_NAME: the name of the Amazon ECR repo, i.e., sample-app.
    • AWS_ACCESS_KEY_ID: the access key ID of the IAM user you created earlier.
    • AWS_SECRET_ACCESS_KEY: the secret access key of the IAM user you created earlier.
    • HUMANITEC_TOKEN: the Humanitec API token you created earlier.
  • Go to your forked repository and switch to the Actions tab. Create a new workflow from scratch.
  • Add the following code snippet to build the Docker image, push it to Amazon ECR, and inform Humanitec.
```yaml
name: Sample CI Pipeline

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build-and-push:
    name: Build and push image
    runs-on: ubuntu-latest

    env:
      IMAGE_NAME: my-sample-app
      HUMANITEC_ORG: my-humanitec-org # Make sure to change this

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2 # Make sure to change this if needed

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push the image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.REPO_NAME }}
        run: |
          # Build a docker container and push it to ECR
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$GITHUB_SHA .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$GITHUB_SHA

      # Inform Humanitec about new image
      - name: Inform Humanitec
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.REPO_NAME }}
        run: |-
          # On push events GITHUB_REF has the form refs/heads/<branch>
          BRANCH=${GITHUB_REF#refs/heads/}
          curl \
            --request POST 'https://api.humanitec.io/orgs/'$HUMANITEC_ORG'/images/'$IMAGE_NAME'/builds' \
            --header 'Authorization: Bearer ${{ secrets.HUMANITEC_TOKEN }}' \
            --header 'Content-Type: application/json' \
            --data-raw '{
              "branch": "'$BRANCH'",
              "commit": "'$GITHUB_SHA'",
              "image": "'$ECR_REGISTRY/$ECR_REPOSITORY:$GITHUB_SHA'",
              "tags": ["'$GITHUB_SHA'"]
            }'
```
  • Repeat the same steps for the sample-service. Make sure to change the IMAGE_NAME to my-sample-service in the GitHub Action.
Tutorial work in progress; will be published soon.

Next Steps

You have now connected and configured all the CI pipelines and infrastructure needed to create the sample application within Humanitec. Just follow the Create Sample App tutorial to create and deploy the application. Make sure to select my-sample-app and my-sample-service as images. After the first deployment, check what happened in your Kubernetes cluster and in your database instance: there should be a new namespace with workloads in the Kubernetes cluster, and a new database user and database within the database instance.
Also, make sure to remove the created infrastructure if you don't need it any longer to avoid unnecessary costs.