An application, cluster, or repository can be created in ArgoCD from its Web UI, from the CLI, or by writing a Kubernetes manifest that can then be passed to kubectl to create the resources.
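As a sketch of the manifest approach, here is a minimal Application definition based on the upstream Argo CD example apps repository (the application name, namespace, and target path are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  # the namespace where Argo CD itself is running
  namespace: argocd
spec:
  project: default
  source:
    # the Git repository to deploy from
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    # the in-cluster API server
    server: https://kubernetes.default.svc
    namespace: guestbook
```

Saving this to a file and running `kubectl apply -f` against it creates the Application just as the Web UI or CLI would.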
For example, Applications are Kubernetes Custom Resources, described by a Kubernetes CRD:
$ kubectl get crd applications.argoproj.io
NAME CREATED AT
And they are accessible in ArgoCD's namespace as common Kubernetes resources:
$ kubectl -n dev-1-18-devops-argocd-ns get applications
NAME SYNC STATUS HEALTH STATUS
backend-app OutOfSync Missing
dev-1-18-web-payment-service-ns Synced Healthy
web-fe-github-actions Synced Healthy
In the previous post, ArgoCD: users, access, and RBAC, we checked how to manage users and their permissions in ArgoCD; now let's add SSO authentication.
The idea is that instead of adding user accounts locally in ArgoCD's ConfigMap, we will use our Okta user database, and Okta will perform their authentication. ArgoCD, on its side, will handle the users' authorization, i.e. check their permission boundaries.
Also, by using SSO we will be able to create user groups with various roles tied to specific Projects.
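Such group-to-role bindings live in the argocd-rbac-cm ConfigMap. A minimal sketch, assuming Okta sends group names like "DevOps" and "Developers" in the SAML assertion (both group names and roles here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    # map the Okta "DevOps" group to the built-in admin role
    g, DevOps, role:admin
    # map the Okta "Developers" group to a custom role
    g, Developers, role:developers
  # role applied to authenticated users with no explicit mapping
  policy.default: role:readonly
```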
We will use SAML (with Dex), see also…
ArgoCD has two types of users: local ones, which are set in the argocd-cm ConfigMap, and SSO users.
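A local user is declared with an accounts.<name> entry in argocd-cm. A minimal sketch (the user name is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # declare a local user named "testuser";
  # "apiKey" allows generating API tokens, "login" allows Web UI/CLI login
  accounts.testuser: apiKey, login
```

The user's password is then set separately, e.g. with `argocd account update-password --account testuser`.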
Below, we will speak about local user management, and in the next chapter we will see how to integrate ArgoCD with Okta, because local users can't be organized into groups. See the documentation on the Local users/accounts page.
For any user, permissions can be configured with roles that have policies attached, describing the objects the user is allowed to access and the operations they can perform on them.
With this, access can be configured globally per cluster or dedicated to specific Projects.
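To illustrate the difference, here is a sketch of two policy lines for the argocd-rbac-cm ConfigMap: one global role and one scoped to a single Project (the role and Project names are illustrative):

```yaml
data:
  policy.csv: |
    # a global role: may sync any application in any Project ("*/*")
    p, role:sync-only, applications, sync, */*, allow
    # a Project-scoped role: full access, but only to
    # applications belonging to the "web" Project ("web/*")
    p, role:web-admin, applications, *, web/*, allow
```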
Let’s start with adding a simple…
Let's proceed with our Istio journey.
Besides Istio, in this post, we will also configure ExternalDNS, see the Kubernetes: update AWS Route53 DNS from an Ingress for details.
Everything described below is a kind of Proof of Concept and will be deployed to the same AWS Elastic Kubernetes Service Dev cluster.
In the previous post, Istio: an overview and running Service Mesh in Kubernetes, we started Istio on AWS Elastic Kubernetes Service and got an overview of its main components.
The next task is to add an AWS Application Load Balancer (ALB) in front of the Istio Ingress Gateway, because the Istio Gateway Service with its default type LoadBalancer creates an AWS Classic LoadBalancer, where we can attach only one SSL certificate from Amazon Certificate Manager.
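The usual approach is to switch the istio-ingressgateway Service to the NodePort type and put an ALB-backed Ingress in front of it. A minimal sketch, assuming the AWS ALB Ingress Controller is installed in the cluster (the Ingress name is illustrative, and the certificate ARN is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: istio-alb-ingress
  namespace: istio-system
  annotations:
    # tell the AWS ALB Ingress Controller to handle this resource
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # an ACM certificate to terminate SSL on the ALB
    alb.ingress.kubernetes.io/certificate-arn: <ACM-CERTIFICATE-ARN>
spec:
  rules:
    - http:
        paths:
          # send all traffic to the Istio Ingress Gateway Service
          - backend:
              serviceName: istio-ingressgateway
              servicePort: 80
```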
Currently, I'm actively working on optimizing our AWS infrastructure costs and will publish a series of posts about it.
The first one will be about AWS RDS Reserved Instances. The idea is quite simple: you commit to using certain instance types for one or three years. You can choose to pay for the whole term upfront, which gives the biggest discount, to pay partially upfront, or to keep paying month by month as usual. In any case, AWS will give you some discount for the service. …
Istio is a Service Mesh solution that provides Service Discovery, load balancing, traffic control, canary rollouts and blue-green deployments, and traffic monitoring between microservices.
We will use Istio in our AWS Elastic Kubernetes Service for traffic monitoring, as an API Gateway service, for traffic policies, and for various deployment strategies.
In this post, we will speak about the Service Mesh concept in general, then take an overview of Istio's architecture and components, its installation process, and how to run a test application.
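To give a taste of how a test application is exposed through Istio, here is a sketch of a Gateway and a VirtualService (the names, hosts, and backend Service are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-gateway
spec:
  # bind to the default Istio Ingress Gateway pods
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-app
spec:
  hosts:
    - "*"
  gateways:
    - test-gateway
  http:
    # route all HTTP traffic to the test application's Service
    - route:
        - destination:
            host: test-app-svc
            port:
              number: 80
```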
Let’s configure Opsgenie with AWS RDS.
The idea is to get notifications from RDS about events and send them to Opsgenie which will send them to our Slack.
The official documentation can be found here.
Go to the Integrations list, find AWS RDS, activate it:
Usually, we don't see Endpoints objects when using Kubernetes Services, as they work under the hood, similarly to ReplicaSets, which are "hidden" behind Kubernetes Deployments.
So, a Service is a Kubernetes abstraction that uses labels to choose the pods to route traffic to; see the Kubernetes: ClusterIP vs NodePort vs LoadBalancer, Services, and Ingress — an overview with examples and Kubernetes: Service, load balancing, kube-proxy, and iptables posts:
- protocol: TCP
As soon as a new pod appears in the cluster with labels matching the Service's…
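The `- protocol: TCP` line above is one entry of a Service's ports list. For context, a minimal Service sketch with a label selector (the names, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  # pods carrying this label will be picked up as backends,
  # and Kubernetes will create a matching Endpoints object for them
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      # the container port traffic is forwarded to
      targetPort: 8080
```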