
How to Serverless Spark with AWS Fargate by AWS CDK

Ever since a customer engagement pulled me into the world of Amazon EMR on EKS a while back, I have been spending extra time in this area. Apache Hadoop and Kubernetes each have their own huge ecosystem and dedicated specialists (data engineers and SREs), and once Amazon EMR on EKS merges these two beasts, it becomes a real challenge for both data engineers and SREs; working across domains is still a demanding problem.

For a data engineer, managing hundreds of Spark jobs inside a Hadoop cluster is already troublesome enough; add Kubernetes on top and many data engineers will simply be overwhelmed, and if the collaboration with the SRE team goes poorly the whole schedule slips. Ideally there would be something simpler, cheaper, and easier to manage than Amazon EMR. Thinking from the user's point of view, the real answer is to offer a "serverless approach at a reasonable price".

Serverless Spark with AWS Fargate is the topic of this post. I combined Amazon EMR on EKS with AWS Fargate to see whether the result can meet data engineers' expectations. Here is the architecture I have in mind:

  1. Create an empty Amazon EKS cluster.
  2. Create two AWS Fargate profiles: "eks-emr" and "kube-system".
  3. Configure the Amazon EMR on EKS integration; writing it as AWS CDK makes it easier to maintain later.
  4. Data engineers only need to submit PySpark jobs to Amazon EMR on EKS, and AWS Fargate spins up the corresponding Fargate instances based on the resources each PySpark job requests.

Steps 1 to 3 only need to be run during the initial setup; after that, data engineers can focus on their PySpark work. Adding AWS Fargate greatly reduces the operational concerns of Amazon EMR on EKS, and the Amazon EMR on EKS + AWS Fargate pricing model is much closer to per-job billing, so the cost of each PySpark job can be calculated more precisely.
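
As a reference for step 4, submitting a job to the virtual cluster looks roughly like the sketch below. It uses the AWS SDK for JavaScript v3 EMR containers client; the region, virtual cluster ID, account ID, S3 path, and release label are placeholders for illustration, not values produced by this stack.

import { EMRContainersClient, StartJobRunCommand } from "@aws-sdk/client-emr-containers";

// Placeholders: replace the region, virtual cluster ID, account ID, and S3 path with your own values.
const client = new EMRContainersClient({ region: "ap-northeast-1" });

async function submitJob() {
    const result = await client.send(new StartJobRunCommand({
        virtualClusterId: "<virtual-cluster-id>",
        name: "sample-pyspark-job",
        executionRoleArn: "arn:aws:iam::<account-id>:role/AmazonEMRContainersJobExecutionRole",
        releaseLabel: "emr-6.4.0-latest",
        jobDriver: {
            sparkSubmitJobDriver: {
                entryPoint: "s3://<bucket>/scripts/job.py",
                sparkSubmitParameters: "--conf spark.executor.instances=2 --conf spark.executor.memory=2G",
            },
        },
    }));
    // EMR on EKS schedules the Spark driver/executor pods into the emr-containers
    // namespace, and the Fargate profile turns each pod into a Fargate instance.
    console.log("Started job run:", result.id);
}

submitJob().catch(console.error);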

Running Amazon EMR on EKS with AWS Fargate by AWS CDK

While I was at it, I also wrote the whole Amazon EMR on EKS setup in AWS CDK. The full repository is at shazi7804/cdk-samples for reference, and it roughly breaks down into the parts below.
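
All of the snippets in this post are excerpts from a single CDK stack, so they share one set of imports. Assuming CDK v2-style module paths (with CDK v1 the same constructs come from the per-service @aws-cdk/* packages), the shared imports look roughly like this:

import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as emrc from 'aws-cdk-lib/aws-emrcontainers';
import * as fs from 'fs';
import * as yaml from 'js-yaml';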

Creating the Amazon EKS Cluster and AWS Fargate Profile

// Admin role that is later mapped to system:masters via aws-auth.
const mastersRole = new iam.Role(this, 'AdminRole', {
    assumedBy: new iam.AccountRootPrincipal()
});
// Reuse an existing Fargate pod execution role in the account.
const podExecutionRole = iam.Role.fromRoleArn(this, 'pod-execution-role', "arn:aws:iam::" + this.account + ":role/AWSFargatePodExecutionRole");

// EKS cluster with no EC2 node group (defaultCapacity: 0); every pod runs on Fargate.
// `vpc` is defined earlier in the stack (not shown here).
const cluster = new eks.Cluster(this, 'EksCluster', {
    clusterName: 'emr-spark-containers',
    vpc,
    vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_NAT }],
    mastersRole,
    defaultCapacity: 0,
    version: eks.KubernetesVersion.V1_21,
    endpointAccess: eks.EndpointAccess.PUBLIC_AND_PRIVATE,
});
// A single Fargate profile covering kube-system, default, and the EMR namespace.
cluster.addFargateProfile('fargate-profile', {
    selectors: [
        { namespace: "kube-system"},
        { namespace: "default"},
        { namespace: "emr-containers"}
    ],
    subnetSelection: { subnetType: ec2.SubnetType.PRIVATE_WITH_NAT },
    vpc,
    podExecutionRole
});

Next, configure the IRSA permissions that Amazon EMR on EKS needs, as well as the permissions for the vpc-cni add-on.

// Patch aws-node daemonset to use IRSA via EKS Addons, do before nodes are created
// https://aws.github.io/aws-eks-best-practices/security/docs/iam/#update-the-aws-node-daemonset-to-use-irsa
const awsNodeTrustPolicy = new cdk.CfnJson(this, 'aws-node-trust-policy', {
    value: {
      [`${cluster.openIdConnectProvider.openIdConnectProviderIssuer}:aud`]: 'sts.amazonaws.com',
      [`${cluster.openIdConnectProvider.openIdConnectProviderIssuer}:sub`]: 'system:serviceaccount:kube-system:aws-node',
    },
});
const awsNodePrincipal = new iam.OpenIdConnectPrincipal(cluster.openIdConnectProvider).withConditions({
    StringEquals: awsNodeTrustPolicy,
});
const awsNodeRole = new iam.Role(this, 'aws-node-role', {
    assumedBy: awsNodePrincipal
})

awsNodeRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKS_CNI_Policy'))

// Set up IRSA for the EMR job execution containers.
const jobExecutionTrustPolicy = new cdk.CfnJson(this, 'job-execution-trust-policy', {
    value: {
      [`${cluster.openIdConnectProvider.openIdConnectProviderIssuer}:aud`]: 'sts.amazonaws.com',
      [`${cluster.openIdConnectProvider.openIdConnectProviderIssuer}:sub`]: 'system:serviceaccount:emr-containers:emr-on-eks-job-execution-role',
    },
});
const jobExecutionPrincipal = new iam.OpenIdConnectPrincipal(cluster.openIdConnectProvider).withConditions({
    StringLike: jobExecutionTrustPolicy,
});

// Minimal permissions the job execution role needs: CloudWatch Logs and S3 access.
const jobExecutionPolicy = new iam.PolicyStatement();
jobExecutionPolicy.addAllResources()
jobExecutionPolicy.addActions(
    "logs:PutLogEvents",
    "logs:CreateLogStream",
    "logs:DescribeLogGroups",
    "logs:DescribeLogStreams",
    "s3:PutObject",
    "s3:GetObject",
    "s3:ListBucket"
);

const jobExecutionRole = new iam.Role(this, 'execution-role', {
    roleName: 'AmazonEMRContainersJobExecutionRole',
    assumedBy: jobExecutionPrincipal
});

jobExecutionRole.addToPolicy(jobExecutionPolicy);
jobExecutionRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AmazonElasticMapReduceRole'));

// Patch EMR job driver and creator containers to use IRSA.
const jobDriverTrustPolicy = new cdk.CfnJson(this, 'job-driver-trust-policy', {
    value: {[`${cluster.openIdConnectProvider.openIdConnectProviderIssuer}:sub`]: 'system:serviceaccount:emr-containers:emr-containers-sa-*-*-' + this.account + '-*'},
});
// The service account name contains wildcards, so the condition must use StringLike rather than StringEquals.
const jobDriverPrincipal = new iam.OpenIdConnectPrincipal(cluster.openIdConnectProvider).withConditions({
    StringLike: jobDriverTrustPolicy,
});

// Update the trust policy of the job driver container to use IRSA.
jobExecutionRole.assumeRolePolicy?.addStatements(
    new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            principals: [jobDriverPrincipal],
            actions: ['sts:AssumeRoleWithWebIdentity']
    }),
)

Managing Amazon EKS add-ons

// Managed add-ons: vpc-cni (bound to the IRSA role created above), kube-proxy, and CoreDNS.
new eks.CfnAddon(this, 'vpc-cni', {
    addonName: 'vpc-cni',
    resolveConflicts: 'OVERWRITE',
    clusterName: cluster.clusterName,
    addonVersion: "v1.9.1-eksbuild.1",
    serviceAccountRoleArn: awsNodeRole.roleArn
});
new eks.CfnAddon(this, 'kube-proxy', {
    addonName: 'kube-proxy',
    resolveConflicts: 'OVERWRITE',
    clusterName: cluster.clusterName,
    addonVersion: "v1.21.2-eksbuild.2",
});
new eks.CfnAddon(this, 'core-dns', {
    addonName: 'coredns',
    resolveConflicts: 'OVERWRITE',
    clusterName: cluster.clusterName,
    addonVersion: "v1.8.4-eksbuild.1",
});

For the Kubernetes manifests I chose to load YAML files into AWS CDK, using the js-yaml package together with Node's fs module.

// Load the multi-document YAML manifest and apply it to the cluster.
const manifestEmrContainers = yaml.loadAll(fs.readFileSync('manifests/emrContainersDeploy.yaml', 'utf-8')) as Record<string, any>[];
const manifestEmrContainersDeploy = new eks.KubernetesManifest(this, 'emr-containers-deploy', {
  cluster,
  manifest: manifestEmrContainers,
  prune: false
});

The manifest contains the Namespace, Role, RoleBinding, and ServiceAccount that Amazon EMR on EKS requires:

apiVersion: v1
kind: Namespace
metadata:
  name: emr-containers
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: emr-on-eks-job-execution-role
  namespace: emr-containers
  annotations:
    eks.amazonaws.com/role-arn: 'arn:aws:iam::${aws-account-id}:role/AmazonEMRContainersJobExecutionRole'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: emr-containers
  namespace: emr-containers
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  - services
  - configmaps
  - events
  - pods
  - pods/log
  verbs:
  - get
  - list
  - watch
  - describe
  - create
  - edit
  - delete
  - deletecollection
  - annotate
  - patch
  - label
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
  - patch
  - delete
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  - deployments
  verbs:
  - get
  - list
  - watch
  - describe
  - create
  - edit
  - delete
  - annotate
  - patch
  - label
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - list
  - watch
  - describe
  - create
  - edit
  - delete
  - annotate
  - patch
  - label
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
  - describe
  - create
  - edit
  - delete
  - annotate
  - patch
  - label
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - get
  - list
  - watch
  - describe
  - create
  - edit
  - delete
  - deletecollection
  - annotate
  - patch
  - label
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: emr-containers
  namespace: emr-containers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: emr-containers
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: emr-containers

For the aws-auth ConfigMap in EKS, I chose to append entries using the AWS CDK eks.AwsAuth construct rather than overwriting it with a Kubernetes manifest.

// IAM integration for EMR: look up the service-linked role used by Amazon EMR on EKS.
const emrContainerServiceRole = iam.Role.fromRoleArn(this, 'ServiceRoleForAmazonEMRContainers',
    "arn:aws:iam::" + this.account + ":role/AWSServiceRoleForAmazonEMRContainers"
);

// Append the admin role, the Fargate pod execution role, and the EMR service-linked role to aws-auth.
const awsAuth = new eks.AwsAuth(this, 'aws-auth', {cluster})
awsAuth.addRoleMapping(mastersRole, {
    username: 'masterRole',
    groups: ['system:masters']
});
awsAuth.addRoleMapping(podExecutionRole, {
    username: 'system:node:{{SessionName}}',
    groups: [
        'system:bootstrappers',
        'system:nodes',
        'system:node-proxier'
    ]
});
awsAuth.addRoleMapping(emrContainerServiceRole, {
    username: 'emr-containers',
    groups: []
});

Finally, map the Amazon EMR on EKS virtual cluster to the EKS namespace:

const virtualCluster = new emrc.CfnVirtualCluster(this, 'EmrContainerCluster', {
    name: props.virtual_cluster_name,
    containerProvider: {
        id: cluster.clusterName,
        type: "EKS",
        info: {
            eksInfo: { namespace: "emr-containers"}
        }
    }
});

virtualCluster.node.addDependency(cluster);
virtualCluster.node.addDependency(manifestEmrContainersDeploy);
virtualCluster.node.addDependency(awsAuth);

Because the EMR virtual cluster can only be created after the Amazon EKS cluster and all of the IRSA work are ready, the AWS CDK node.addDependency function helps us enforce that ordering.
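
To make the virtual cluster ID and the job execution role ARN easy to hand off for job submission, they can also be exported as stack outputs. A small sketch (the output names here are arbitrary, not part of the original stack):

// Expose the identifiers a data engineer needs in order to call StartJobRun.
new cdk.CfnOutput(this, 'EmrVirtualClusterId', {
    value: virtualCluster.attrId
});
new cdk.CfnOutput(this, 'EmrJobExecutionRoleArn', {
    value: jobExecutionRole.roleArn
});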

The more complete code samples are in shazi7804/cdk-samples; if you have improvements or run into problems, PRs and issues are welcome.
