I'm going to assume you know something about Kubernetes. But what you may not know is how you can actually deploy something like Airflow inside Kubernetes.

## Basics of Deploying Airflow inside Kubernetes

Simply put, an Airflow installation usually consists of a Scheduler, Airflow Workers, a Web Server/UI, and optionally a Database.

*Airflow architecture details (photo by me)*

## Kubernetes Executors

Before the KubernetesExecutor, all previous Airflow solutions, like the Celery Executor or the Sequential Executor, required static workers. With the KubernetesExecutor, workers are allocated dynamically: the executor runs as a process inside the Airflow Scheduler, and when a DAG submits a task, the KubernetesExecutor requests a worker pod from the Kubernetes API. The KubernetesExecutor requires a non-SQLite database as its backend, and while the Scheduler itself does not necessarily need to be running on Kubernetes, it does need access to a Kubernetes cluster.

Because every task gets its own pod, the pod spec can be customized per task. The example below builds the Kubernetes API objects for such a customization: secrets exposed as a file and as environment variables, a volume with its mount, a container port, an init container, and scheduling affinity.

```python
from airflow.kubernetes.secret import Secret
from kubernetes.client import models as k8s

# Expose an Airflow secret three ways: as a mounted file, as a single
# environment variable, and as all keys of a secret.
secret_file = Secret("volume", "/etc/sql_conn", "airflow-secrets", "sql_alchemy_conn")
secret_env = Secret("env", "SQL_CONN", "airflow-secrets", "sql_alchemy_conn")
secret_all_keys = Secret("env", None, "airflow-secrets-2")

# Mount an existing PersistentVolumeClaim into the worker pod.
volume_mount = k8s.V1VolumeMount(
    name="test-volume", mount_path="/root/mount_file", sub_path=None, read_only=True
)
configmaps = ["test-configmap-1", "test-configmap-2"]
volume = k8s.V1Volume(
    name="test-volume",
    persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name="test-volume"),
)

port = k8s.V1ContainerPort(name="http", container_port=80)

# An init container that runs before the main container and shares the volume.
init_container_volume_mounts = [
    k8s.V1VolumeMount(name="test-volume", mount_path="/etc/foo", sub_path=None, read_only=True)
]
init_environments = [
    k8s.V1EnvVar(name="key1", value="value1"),
    k8s.V1EnvVar(name="key2", value="value2"),
]
init_container = k8s.V1Container(
    name="init-container",
    image="ubuntu:16.04",
    env=init_environments,
    volume_mounts=init_container_volume_mounts,
    command=["bash", "-cx"],
    args=["echo 10"],
)

# Prefer nodes with SSDs, and require scheduling next to pods labeled security=S1.
affinity = k8s.V1Affinity(
    node_affinity=k8s.V1NodeAffinity(
        preferred_during_scheduling_ignored_during_execution=[
            k8s.V1PreferredSchedulingTerm(
                weight=1,
                preference=k8s.V1NodeSelectorTerm(
                    match_expressions=[
                        k8s.V1NodeSelectorRequirement(key="disktype", operator="In", values=["ssd"])
                    ]
                ),
            )
        ]
    ),
    pod_affinity=k8s.V1PodAffinity(
        required_during_scheduling_ignored_during_execution=[
            k8s.V1PodAffinityTerm(
                label_selector=k8s.V1LabelSelector(
                    match_expressions=[
                        k8s.V1LabelSelectorRequirement(key="security", operator="In", values=["S1"])
                    ]
                ),
                topology_key="kubernetes.io/hostname",  # illustrative topology key
            )
        ]
    ),
)
```
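One way to attach these objects to a task under the KubernetesExecutor is the per-task `executor_config` with a `pod_override`. The sketch below assumes Airflow 2.x and the objects defined above being in scope; the DAG and task ids are illustrative, not from the original:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from kubernetes.client import models as k8s

with DAG(
    dag_id="k8s_executor_demo",  # illustrative DAG id
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
):
    BashOperator(
        task_id="customized_worker",  # illustrative task id
        bash_command="echo 'running in a custom worker pod'",
        executor_config={
            "pod_override": k8s.V1Pod(
                spec=k8s.V1PodSpec(
                    # The container must be named "base" so the executor merges it
                    # with the default worker container rather than adding a second one.
                    containers=[
                        k8s.V1Container(
                            name="base",
                            ports=[port],
                            volume_mounts=[volume_mount],
                        )
                    ],
                    volumes=[volume],
                    init_containers=[init_container],
                    affinity=affinity,
                )
            )
        },
    )
```

The executor merges this override into the pod it requests from the Kubernetes API, so only the fields set here differ from the stock worker pod.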
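And to enable the executor in the first place, the deployment configuration has to select it and point Airflow at a database every component can reach. A minimal sketch, assuming Airflow 2.3+ (where the connection setting lives in the `[database]` section); the Postgres connection string is a placeholder:

```ini
# airflow.cfg -- minimal sketch; the connection string is a placeholder
[core]
executor = KubernetesExecutor

[database]
# Any non-SQLite SQLAlchemy-compatible database works, e.g. Postgres or MySQL.
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@airflow-db:5432/airflow
```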