Deploying on AWS

  1. Install the AWS CLI:
    • If you already have the AWS CLI installed and credentials configured, skip to step 2.
    • Otherwise, run make awscli-install.
  2. Install kops:
    • If you already have kops 1.18+ installed, skip to step 3.
    • Otherwise, run make kops-install.
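You can verify that both tools are installed and on your PATH:
aws --version
kops version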
  3. Edit the .env file and set KOPS_CLUSTER_NAME and KOPS_STATE_STORE to the names of your choice:
# ======================== KOPS ========================
KOPS_CLUSTER_NAME=example.k8s.local
KOPS_STATE_STORE=s3://kops-example-com-state-store
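If the state-store bucket does not exist yet (and the Makefile does not create it for you), you can create it with the AWS CLI; kops also recommends enabling versioning on it. The bucket name below is the example from above, so substitute your own globally unique name:
aws s3api create-bucket --bucket kops-example-com-state-store --region us-east-1
aws s3api put-bucket-versioning --bucket kops-example-com-state-store --versioning-configuration Status=Enabled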
  4. Edit the .env file again and choose the size of the cluster: the node count, node size, region, etc.:
AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
AWS_REGION=us-east-1
AWS_NODE_COUNT=1
AWS_NODE_SIZE=t2.medium
AWS_MASTER_SIZE=t2.small
AWS_EFS_TOKEN=example-efs
  5. Create the cluster: make kops-cluster.
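For reference, make kops-cluster presumably wraps a kops create cluster invocation along these lines, assembled from the .env values above (a sketch; the actual Makefile target may differ):
kops create cluster \
  --name "$KOPS_CLUSTER_NAME" \
  --state "$KOPS_STATE_STORE" \
  --zones us-east-1a \
  --node-count "$AWS_NODE_COUNT" \
  --node-size "$AWS_NODE_SIZE" \
  --master-size "$AWS_MASTER_SIZE" \
  --yes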
  6. Next, build the Docker containers used in the project:
  • Install docker: make docker-install.
  • You need a registry to store those images:
    • If you're deploying in the cloud, you will probably be using Docker Hub or a similar service.
    • Create an account and put your credentials in the .env file like this:
    # Docker hub
    DOCKER_HUB_USR=your_docker_hub_username
    DOCKER_HUB_PWD=your_docker_hub_password
    • Build and release:
      • backend: !todo!
      • frontend: !todo!
      • make consumer-release
      • make multiav-release
      • make multiav-release-go
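    • The release targets must be logged in to Docker Hub before they can push; a typical login using the credentials from .env (assuming they are exported into the shell):
    echo "$DOCKER_HUB_PWD" | docker login --username "$DOCKER_HUB_USR" --password-stdin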
  7. Install Helm: make helm-install.
  8. Initialize cert-manager (make k8s-init-cert-manager) and install the Couchbase CRDs (make k8s-install-couchbase-crds).
  9. Edit deployments/saferwall/values.yaml (a combined sketch of these settings follows this list):
    • Set efs-provisioner.enabled to true.
    • Set couchbase-operator.cluster.volumeClaimTemplates.spec.storageClassName to default.
    • If you want to view logs in EFK (Elasticsearch, Filebeat, Kibana):
      • Set elasticsearch.enabled to true.
      • Set kibana.enabled to true.
      • Set filebeat.enabled to true.
    • Set prometheus-operator.enabled to true if you want to collect metrics, and enable kubelet webhook authentication via kops edit cluster:
      spec:
        kubelet:
          anonymousAuth: false
          authenticationTokenWebhook: true
          authorizationMode: Webhook
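Put together, the values.yaml changes above amount to something like this sketch (the exact nesting may differ between chart versions, so treat it as a guide rather than a drop-in file):
efs-provisioner:
  enabled: true
couchbase-operator:
  cluster:
    volumeClaimTemplates:
      - spec:
          storageClassName: default
elasticsearch:
  enabled: true
kibana:
  enabled: true
filebeat:
  enabled: true
prometheus-operator:
  enabled: true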
  10. Install the Helm chart: make helm-release.
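Once the chart is installed, you can sanity-check the release and watch the pods come up:
helm list
kubectl get pods --all-namespaces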

Tips for deploying a production cluster

  • To have an HA cluster, you need more than one master (three is the usual choice, since etcd needs an odd-sized quorum) and several workers, spread across different availability zones.
  • With multiple master nodes, you can do graceful (zero-downtime) upgrades and survive AZ failures.
  • Harden the Kubernetes API so it is accessible only from allowed IPs, either via firewall rules or via the kubernetesApiAccess field in the kops cluster config (see the example below).
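For example, to allow API access only from one CIDR (203.0.113.0/24 is a documentation placeholder; substitute your office or VPN range), run kops edit cluster and set:
spec:
  kubernetesApiAccess:
  - 203.0.113.0/24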

Billing Reduction Tips

  • Opt for EC2 spot instances and reserved instances.
  • Back the cluster with a baseline of on-demand instances that can take up the slack if spot instances are interrupted. This improves availability and reliability.
  • Size of the master node:
    • 1-5 nodes: m3.medium
    • 6-10 nodes: m3.large
    • 11-100 nodes: m3.xlarge
    • 101-250 nodes: m3.2xlarge
    • 251-500 nodes: c4.4xlarge
    • more than 500 nodes: c4.8xlarge
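With kops, spot instances are requested by giving an instance group a maxPrice; a minimal sketch follows (machine type, sizes, price, and subnet are placeholders, and the repository's build/k8s/spot-ig.yaml used in the Autoscaling section below presumably follows this shape):
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: example.k8s.local
  name: spot-nodes
spec:
  machineType: t2.medium
  maxPrice: "0.02"
  maxSize: 3
  minSize: 1
  role: Node
  subnets:
  - us-east-1a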

CPU/MEM usage per service

  • The memory and CPU figures below were observed at peak time; they represent the maximum memory/CPU an engine uses during a file scan. We scanned multiple file formats, since different formats can trigger different components of an engine, to estimate these numbers realistically.
  • Some engines, like ClamAV, have a daemonized version; those are generally faster because the rules are loaded only once.
Service         CPU Util   Mem Util   Performance
AV Avast        1 core     1260MB     Fast
AV Avira        1 core     200MB      Slow
AV Bitdefender  1 core     600MB      Slow
AV ClamAV       1 core     1700MB     Fast
AV COMODO       1 core     300MB      Medium
AV DrWeb        1.2 core   580MB      Fast
AV ESET         1 core     220MB      Medium
AV FSecure      1 core     420MB      Fast
AV McAfee       1 core     400MB      Medium
AV Sophos       1 core     300MB      Medium
AV Symantec     0.4 core   300MB      Medium
AV TrendMicro   -          -          Medium
AV Windefender  -          -          Medium
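These peak figures map naturally onto Kubernetes resource requests for the scanner pods; for example, a ClamAV container based on the table above (the limits add headroom and are illustrative, not measured):
resources:
  requests:
    cpu: "1"
    memory: 1700Mi
  limits:
    cpu: "1"
    memory: 2Gi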

Tune Kops max pods

  • Kops leaves the kubelet max pods at its default of 110, the maximum recommended by Kubernetes.
  • However, if you're spinning up large compute instances, you might hit that limit while you still have plenty of compute power left. You can edit this value via kops edit cluster:
spec:
  kubelet:
    maxPods: 200
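As with any change to the cluster spec, apply it and then roll the nodes so the new kubelet flag takes effect:
kops update cluster --yes
kops rolling-update cluster --yes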

Autoscaling

  • Create an instance group for spot instances: kops create -f build/k8s/spot-ig.yaml
  • Attach required policies to the cluster: kops edit cluster
kind: Cluster
...
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeAutoScalingInstances",
            "autoscaling:DescribeLaunchConfigurations",
            "autoscaling:SetDesiredCapacity",
            "autoscaling:TerminateInstanceInAutoScalingGroup",
            "autoscaling:DescribeTags"
          ],
          "Resource": "*"
        }
      ]
...
  • Update the cluster to review the changes: kops update cluster.
  • Add --yes to apply the changes: kops update cluster --yes.
  • Changes may require instances to restart: kops rolling-update cluster.
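The IAM permissions above are the ones the Kubernetes cluster-autoscaler needs; the autoscaler itself is typically installed with its Helm chart (the cluster name and region below are the .env examples from earlier; adjust them to yours):
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --set autoDiscovery.clusterName=example.k8s.local \
  --set awsRegion=us-east-1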