Elastic Kubernetes Service (EKS) Support

You can deploy the Akana Kubernetes Adapter on an AWS EKS cluster. The Adapter uses ConfigMaps to propagate changes to the Akana Pod configuration across all the Pods, and it keeps the running Akana Pods in sync with the Pods defined in the database.

Overview of Akana Kubernetes Adapter

The Akana Kubernetes Adapter (hereafter referred to as the "Adapter") updates Akana container properties and keeps the deployed Pods and their configurations in sync with the Pods defined in the database. The Adapter is deployed as a standalone Pod within the same cluster and namespace as the Akana Pods and is configured through a set of ConfigMaps.

The Adapter invokes the Admin Console REST APIs on the Pods to update the properties in the Akana container and maintain synchronization between the Pods defined in the database and the Pods running in the EKS cluster.

The Adapter provides two configurable capabilities, Dynamic Logging and Pod Sync, both of which are set in the akana-configmap.

  • Dynamic Logging: The changes made to the container properties in the akana-configmap apply to all Pods that share the same Pod prefix, such as akana-pm, akana-gateway. When a change is made to the configmap, which stores Admin Console properties such as a logging property and the corresponding log level settings (error/trace), that property change is automatically applied to the corresponding Pods with the defined prefix.
  • Pod Sync: The Adapter checks the containers registered in the database and compares them to the Pods running in the cluster. If a container exists in the database but not in the cluster, the Adapter removes the container from the database. This eliminates the need to include the removecontainer.sh script in the pre-stop lifecycle hook for container deployments, or to manually remove the container from the UI. You can enable or disable this functionality in the akana-configmap, and you can schedule the Pod Sync and Adapter updates to run at specific intervals.

Prerequisite

An EKS cluster with Akana Pods running in it.

Deploy the Akana Kubernetes Adapter

Follow these steps to install, configure, and deploy the Akana Kubernetes Adapter.

Install and configure the Adapter

Step 1: Pull an image from Docker Hub

Download the Adapter's Docker image from the Docker Hub registry.

docker image pull <imagename:tag>

For more information, see Docker image pull.
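For example, if the Adapter image were published as akana/kubernetes-adapter, the pull would look like the following. The repository name and tag here are placeholders; use the image name provided with your Akana distribution.

docker image pull akana/kubernetes-adapter:latest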

Step 2: Create an Adapter

To deploy the Adapter, you need to create an akanaadapter.yaml file that contains the necessary configuration details. Update the YAML file with the Adapter image and namespace.

Use the following YAML file structure for the Adapter deployment. This YAML file defines a deployment named "kubernetes-adapter" with a single replica in the default namespace and specifies the container image to use. The container args supply the name of the Akana ConfigMap (akana-configmap) and the namespace (default) to the Adapter.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    app: kubernetes-adapter
  name: kubernetes-adapter
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kubernetes-adapter
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kubernetes-adapter
    spec:
      containers:
      - args:
        - akana-configmap
        - default
        env:
        - name: _JAVA_OPTIONS
          value: -Djava.util.logging.config.file='/app/volume/logging.properties'
        image: <imagename:tag>
        imagePullPolicy: Always
        name: kubernetes-adapter
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /app/volume
          name: logging-file
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: registry-credentials
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: adapter-configmap
        name: logging-file
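Before deploying, you can optionally validate the manifest locally. This sketch assumes kubectl 1.18 or later, where client-side dry runs are available:

kubectl -n default apply --dry-run=client -f akanaadapter.yaml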

Step 3: Create a role

To define a role within the Adapter namespace, use the following YAML file structure (createRole.yaml). It creates a role that grants get, list, and watch permissions on the Akana Pods, ConfigMaps, Secrets, and Services.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: adapter-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/status"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["services",]
  verbs: ["get", "list"]

Step 4: Bind the role to a service account

To assign the role to a service account, create a RoleBinding resource using the YAML file structure in bindRole.yaml. This RoleBinding binds the adapter-role to the default service account under which the Adapter Pod runs.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: adapter-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: adapter-role
  apiGroup: rbac.authorization.k8s.io

Step 5: Create a secret

The next step is to create secrets for the Admin Console and the Policy Manager.

Step 5a: Create a secret for the Admin Console

For each Pod type, generate an EKS secret containing the username and password required to access the Admin Console. Modify the akana.credentials.username and akana.credentials.password properties with the credentials to access the Admin Console. These values must be base64 encoded.
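For example, the sample values shown below were produced with the standard base64 tool (the -n flag prevents a trailing newline from being encoded):

echo -n 'administrator' | base64   # YWRtaW5pc3RyYXRvcg==
echo -n 'password' | base64        # cGFzc3dvcmQ=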

apiVersion: v1
items:
- apiVersion: v1
  data:
    akana.credentials.password: cGFzc3dvcmQ=
    akana.credentials.username: YWRtaW5pc3RyYXRvcg==
  kind: Secret
  metadata:
    annotations:
    labels:
      akana-gateway-adapter: com.soa
    name: akana-secret
    namespace: default
  type: Opaque
kind: List
metadata:
  resourceVersion: ""

Step 5b: Create a secret for the Policy Manager

If the Pod Sync functionality is enabled, a secret is required to send an API request to the Policy Manager to remove orphaned Pods from the database. Update the akana.credentials.password and akana.credentials.username with the credentials of an administrator who has the necessary permissions to log into the Policy Manager and delete containers.

apiVersion: v1
items:
- apiVersion: v1
  data:
    akana.credentials.password: cGFzc3dvcmQ=
    akana.credentials.username: YWRtaW5pc3RyYXRvcg==
  kind: Secret
  metadata:
    annotations:
    labels:
      akana-pm-secret: com.soa
    name: pm-secret
    namespace: default
  type: Opaque
kind: List
metadata:
  resourceVersion: ""

Step 6: Configure the adapter-configmap and akana-configmap

The adapter-configmap.yaml file configures the log level for the Adapter log file. The log levels are INFO, the default setting, and FINE, which is used for debugging issues with the Adapter. If the log level is changed in this configmap, the akanaadapter.yaml must be redeployed for the changes to take effect.

The Adapter logs are written to stdout and can be viewed using the kubectl logs <adapter_pod> command. For example:

kubectl logs kubernetes-adapter-645d56c858-sdb5g

Update the adapter-configmap.yaml file to configure the logging properties for the Adapter.

apiVersion: v1
data:
  logging.properties: |-
    # Limit the messages printed on the console to INFO and above.
    # Change the following levels to FINE to get more logs from the Kubernetes Adapter.
    handlers= java.util.logging.ConsoleHandler
    .level= FINE
    java.util.logging.ConsoleHandler.level = FINE
    java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
    # For example, set the com.akana.k8s.adapter logger to only log FINE
    # messages:
    com.akana.k8s.adapter.level = FINE
    # EOF
kind: ConfigMap
metadata:
  annotations:
  labels:
  name: adapter-configmap
  namespace: default
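If you change the log level, one way to redeploy is to re-apply the ConfigMap and then restart the Adapter deployment; rollout restart assumes kubectl 1.15 or later:

kubectl -n default apply -f adapter-configmap.yaml
kubectl -n default rollout restart deployment kubernetes-adapter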

The akana-configmap.yaml file structure defines the Akana Pod types (for example, akana-pm, akana-pmcm, akana-gateway), Pod details, protocols used to connect to the Pods, Pod cleanup settings, property sync enablement, and the polling interval.

apiVersion: v1
data:
  namespace: default
  akana-orphaned-container-cleanup-interval: "20000"
  akana-pod-sync-enable: "false"
  akana-configmap-sync-interval: "600000"
  akana-configmap-sync-enable: "true"
  akana-pm-service-name: akana-pm
  akana-pm-service-port-name: http
  akana-pm-service-protocol: HTTP
  akana-gateway-akana-container-access-protocol: HTTP
  akana-gateway-akana-container-name: akana-gateway
  akana-gateway-akana-container-port-name: http
  akana-gateway-akana-container-port-protocol: TCP
  akana-gateway-configmap-label-name: app
  akana-gateway-configmap-label-value: akana-gateway-adapter
  akana-gateway-pod-label-name: app
  akana-gateway-pod-label-value: akana-gateway
  akana-gateway-secret-label-name: akana-gateway-adapter
  akana-gateway-secret-label-value: com.soa
  akana-pm-akana-container-access-protocol: HTTP
  akana-pm-akana-container-name: akana-pm
  akana-pm-akana-container-port-name: http
  akana-pm-akana-container-port-protocol: TCP
  akana-pm-configmap-label-name: app
  akana-pm-configmap-label-value: akana-pm-adapter
  akana-pm-pod-label-name: app
  akana-pm-pod-label-value: akana-pm
  akana-pm-secret-label-name: akana-gateway-adapter
  akana-pm-secret-label-value: com.soa
  prefixes: akana-gateway,akana-pm,akana-pmcm
kind: ConfigMap
metadata:
  annotations:
  labels:
    app: akana-adapter
  name: akana-configmap
  namespace: default
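After editing akana-configmap.yaml, apply it and check the Adapter logs to confirm the new settings were picked up; the label selector below matches the app: kubernetes-adapter label from the deployment in Step 2:

kubectl -n default apply -f akana-configmap.yaml
kubectl -n default logs -l app=kubernetes-adapter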

Step 7: Deploy the Adapter Pod

Apply each YAML file with the following kubectl command to deploy the Adapter Pod and its supporting resources.

kubectl -n <NAMESPACE> apply -f <RESOURCE_NAME>.yaml
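For example, applying the resources created in the previous steps to the default namespace might look like the following; the secret manifest file names akana-secret.yaml and pm-secret.yaml are illustrative:

kubectl -n default apply -f createRole.yaml
kubectl -n default apply -f bindRole.yaml
kubectl -n default apply -f akana-secret.yaml
kubectl -n default apply -f pm-secret.yaml
kubectl -n default apply -f adapter-configmap.yaml
kubectl -n default apply -f akana-configmap.yaml
kubectl -n default apply -f akanaadapter.yaml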

[Optional] Step 8: Update a property

You can use a ConfigMap to update a container property. When you modify a logging property in the ConfigMap, the corresponding property in the Pod will be updated.

For example, to configure the gateway logging, you can create a YAML file "akana-gateway-adapter-configmap.yaml" that specifies the logging properties, as shown in the following code block.

apiVersion: v1
items:
- apiVersion: v1
  data:
    com.soa.log_logger.soa.level: error
    com.soa.log_logger.httprequest.level: error
    com.soa.log_logger.transport.level: error
    com.soa.log_logger.akana.level: error
    com.soa.log_logger.wssp.level: error
    com.soa.log_logger.container.level: error
    com.soa.log_logger.httpclient.level: error
    com.soa.log_logger.wsdl.level: error
    com.soa.log_logger.wire.level: error
    com.soa.log_logger.http.level: error
    com.soa.log_logger.jose.level: error
    com.soa.platform.jetty_jetty.information.servlet.enable: "false"
  kind: ConfigMap
  metadata:
    labels:
      app: akana-gateway-adapter
    name: akana-gateway-adapter-configmap
    namespace: default
kind: List

To update the logging properties, change the log level from "error" to "warn" in the YAML file. Property entries use the following syntax:

<property_pid>_<property_name>: <property_value>
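For example, the key com.soa.log_logger.soa.level combines the com.soa.log PID with the logger.soa.level property name, so raising that logger's level looks like this:

com.soa.log_logger.soa.level: warn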

To verify the logging properties, log in to the Admin Console and go to the Configuration tab > Configuration Categories > com.soa.log PID.

Enable dynamic property updates and Pod synchronization

The Adapter is deployed within the same cluster and namespace as the other Akana Pods and configures dynamic property updates and Pod synchronization through ConfigMaps. If the Akana Pods run with multiple replicas and a property in the ConfigMap is modified, the first replica whose ConfigMap synchronization is triggered applies the change. Follow these steps; a verification example appears after the list:

  1. Use the following metadata ConfigMap YAML file structure to store data in key-value pairs.
    apiVersion: v1
    data:
      akana-configmap-sync-enable: "true"
      akana-configmap-sync-interval: "300000"
      akana-orphaned-container-cleanup-interval: "120000"
      akana-pm-secret-label-name: test-secret-label-1
      akana-pm-secret-label-value: test-secret-label-value-1
      akana-pm-service-name: akana-pm-service
      akana-pm-service-port-name: HTTP
      akana-pm-service-protocol: HTTP
      akana-pod-sync-enable: "true"
      namespace: test
      pod1-akana-container-access-protocol: HTTP
      pod1-akana-container-name: akana-pm
      pod1-akana-container-port-name: HTTP
      pod1-akana-container-port-protocol: TCP
      pod1-configmap-label-name: config-map-label1
      pod1-configmap-label-value: config-map-value1
      pod1-pod-label-name: test-pod-label-1
      pod1-pod-label-value: test-pod-value-1
      pod1-secret-label-name: test-secret-label-1
      pod1-secret-label-value: test-secret-label-value-1
      prefixes: pod1
    kind: ConfigMap
    metadata:
      creationTimestamp: "2024-07-01T04:56:37Z"
      name: akana-configmap
      namespace: test
      resourceVersion: "305255"
      uid: b7100b5c-c722-4927-a4d6-05eae06b35f0
  2. Use the following Pod-specific ConfigMap structure. The Adapter configures the container(s) in the matching Pod based on the data in this ConfigMap. The Pod and the ConfigMap must be in the same namespace.
    apiVersion: v1
    data:
      com.soa.log_logger.soa.level: error
      com.soa.log_status: error
      com.soa.platform.jetty_jetty.information.servlet.enable: "false"
    kind: ConfigMap
    metadata:
      creationTimestamp: "2024-04-16T06:15:26Z"
      labels:
        config-map-label1: config-map-value1
      name: test-configmap-1
      namespace: test
      resourceVersion: "250329"
      uid: 4846756a-c022-42f3-87cc-66fcdcceb952
  3. Use the following Secret structure to store data such as a password, a token, or a key.
    apiVersion: v1
    data:
      akana.credentials.password: cGFzc3dvcmQ=
      akana.credentials.username: YWRtaW5pc3RyYXRvcg==
    kind: Secret
    metadata:
      creationTimestamp: "2024-04-17T04:45:01Z"
      labels:
        test-secret-label-1: test-secret-label-value-1
      name: test-secret-1
      namespace: test
      resourceVersion: "142190"
      uid: 8f6bd1a0-06b1-47b3-a9d1-b4cf02482973
    type: Opaque
  4. Use the following Service structure to expose a network application that is running as one or more Pods in your cluster.
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"akana-pm-service","namespace":"test"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080}],"selector":{"app":"akana-pm-service-app"}}}
      creationTimestamp: "2024-06-13T08:16:49Z"
      name: akana-pm-service
      namespace: test
      resourceVersion: "237375"
      uid: ee6ac16b-7f0e-47e6-80bc-2f85d595d0c9
    spec:
      clusterIP: 10.106.219.111
      clusterIPs:
      - 10.106.219.111
      internalTrafficPolicy: Cluster
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: akana-pm-service-app
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}
  5. Use the following Pod structure to manage a single container or a group of containers.
    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test-pod-label-1":"test-pod-value-1"},"name":"eapnd","namespace":"test"},"spec":{"containers":[{"image":"nginx:1.14.2","name":"akana-pm","ports":[{"containerPort":80,"name":"http","protocol":"TCP"}]}]}}
      creationTimestamp: "2024-06-13T09:28:03Z"
      labels:
        test-pod-label-1: test-pod-value-1
      name: eapnd
      namespace: test
      resourceVersion: "261561"
      uid: 3ea75f37-1afb-4f0c-82bd-760e186bcd5d
    spec:
      containers:
      - image: akana-pm-image
        imagePullPolicy: IfNotPresent
        name: akana-pm
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-xwbd7
          readOnly: true
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      nodeName: minikube
      preemptionPolicy: PreemptLowerPriority
      priority: 0
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 300
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 300
      volumes:
      - name: kube-api-access-xwbd7
        projected:
          defaultMode: 420
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2024-06-13T09:28:03Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2024-06-21T04:34:09Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2024-06-21T04:34:09Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2024-06-13T09:28:03Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: docker://836855af79fb9a77b38bf3fa86bda3351fb92e2e432ab85f3e33014c5903af85
        image: nginx:1.14.2
        imageID: docker-pullable://nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
        lastState:
          terminated:
            containerID: docker://4c4e5da06bbac1591f9f290f0998c6ecada19fa0ce9ef334647cf258b377e29b
            exitCode: 0
            finishedAt: "2024-06-18T10:14:08Z"
            reason: Completed
            startedAt: "2024-06-18T06:32:38Z"
        name: akana-pm
        ready: true
        restartCount: 2
        started: true
        state:
          running:
            startedAt: "2024-06-21T04:34:08Z"
      hostIP: 192.168.49.2
      phase: Running
      podIP: 10.244.0.143
      podIPs:
      - ip: 10.244.0.143
      qosClass: BestEffort
      startTime: "2024-06-13T09:28:03Z"