
Configure Hadoop YARN CapacityScheduler on Amazon EMR on Amazon EC2 for multi-tenant heterogeneous workloads

Apache Hadoop YARN (Yet Another Resource Negotiator) is a cluster resource manager responsible for assigning computational resources (CPU, memory, I/O), and scheduling and monitoring jobs submitted to a Hadoop cluster. This generic framework allows for efficient management of cluster resources for distributed data processing frameworks, such as Apache Spark, Apache MapReduce, and Apache Hive. When supported by the framework, Amazon EMR uses Hadoop YARN by default. Note that not all frameworks offered by Amazon EMR use Hadoop YARN, such as Trino/Presto and Apache HBase.

In this post, we discuss the various components of Hadoop YARN and how those components interact with one another to allocate resources, schedule applications, and monitor applications. We dive deep into the specific configurations to customize Hadoop YARN's CapacityScheduler to increase cluster efficiency by allocating resources in a timely and secure manner in a multi-tenant cluster. We take an opinionated look at the configurations for CapacityScheduler and configure them on Amazon EMR on Amazon Elastic Compute Cloud (Amazon EC2) to solve the common resource allocation, resource contention, and job scheduling challenges in a multi-tenant cluster.

We dive deep into CapacityScheduler because Amazon EMR uses CapacityScheduler by default, and CapacityScheduler has benefits over other schedulers for running workloads with heterogeneous resource consumption.

Solution overview

Modern data platforms often run applications on Amazon EMR with the following characteristics:

  • Heterogeneous resource consumption patterns by jobs, such as computation-bound jobs, I/O-bound jobs, or memory-bound jobs
  • Multiple teams running jobs with an expectation to receive an agreed-upon share of cluster resources and complete jobs in a timely manner
  • Cluster admins often need to cater to one-time requests for running jobs without impacting scheduled jobs
  • Cluster admins want to ensure users are using their assigned capacity and not using others'
  • Cluster admins want to utilize resources efficiently and allocate all available resources to currently running jobs, but want to retain the ability to reclaim resources automatically should there be a claim for the agreed-upon cluster resources from other jobs

To illustrate these use cases, let's consider the following scenario:

  • user1 and user2 don't belong to any team and use cluster resources periodically on an ad hoc basis
  • A data platform and analytics program has two teams:
    • A data_engineering team, containing user3
    • A data_science team, containing user4
  • user5 and user6 (and many other users) sporadically use cluster resources to run jobs

Based on this scenario, the scheduler queues may look like the following diagram. Take note of the common configurations applied to all queues, the overrides, and the user/teams-to-queue mappings.

Capacity Scheduler Queue Setup
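A queue layout like the one in the diagram is expressed as capacity-scheduler properties. The following is a minimal sketch only: the queue names follow the scenario above, but the capacity percentages and mappings are illustrative assumptions, not the exact values used later in the CloudFormation template.

```
# Illustrative sketch of a CapacityScheduler queue layout for the scenario
# (queue names from the scenario; capacities are assumed example values)
yarn.scheduler.capacity.root.queues=default,data_engineering,data_science
yarn.scheduler.capacity.root.default.capacity=20
yarn.scheduler.capacity.root.data_engineering.capacity=40
yarn.scheduler.capacity.root.data_science.capacity=40
# Map users (u:) and groups (g:) to queues
yarn.scheduler.capacity.queue-mappings=u:user1:default,u:user2:default,g:data_engineering:data_engineering,g:data_science:data_science
```

Ad hoc users such as user1 and user2 land in the default queue, while team members are routed to their team's queue through group mappings.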

In the following sections, we review the high-level components of Hadoop YARN, discuss the various types of schedulers available in Hadoop YARN, review the core concepts of CapacityScheduler, and showcase how to implement this CapacityScheduler queue setup on Amazon EMR (on Amazon EC2). You can skip to the Code walkthrough section if you are already familiar with Hadoop YARN and CapacityScheduler.

Overview of Hadoop YARN

At a high level, Hadoop YARN consists of three main components:

  • ResourceManager (one per primary node)
  • ApplicationMaster (one per software)
  • NodeManager (one per node)

The following diagram shows the main components and their interaction with one another.

Apache Hadoop YARN Architecture Diagram

Before diving further, let's clarify what Hadoop YARN's ResourceContainer (or container) is. A ResourceContainer represents a collection of physical computational resources. It's an abstraction used to bundle resources into distinct, allocatable units.


The ResourceManager is responsible for resource management and making allocation decisions. It's the ResourceManager's responsibility to identify and allocate resources to a job upon submission to Hadoop YARN. The ResourceManager has two main components:

  • ApplicationsManager (not to be confused with ApplicationMaster)
  • Scheduler


The ApplicationsManager is responsible for accepting job submissions, negotiating the first container for running the ApplicationMaster, and providing the service for restarting the ApplicationMaster on failure.


The Scheduler is responsible for scheduling the allocation of resources to the jobs. The Scheduler performs its scheduling function based on the resource requirements of the jobs. The Scheduler is a pluggable interface. Hadoop YARN currently provides three implementations:

  • CapacityScheduler – A pluggable scheduler for Hadoop that allows multiple tenants to securely share a cluster such that jobs are allocated resources in a timely manner under constraints of allotted capacities. The implementation is available on GitHub. The Java concrete class is org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler. In this post, we primarily focus on CapacityScheduler, which is the default scheduler on Amazon EMR (on Amazon EC2).
  • FairScheduler – A pluggable scheduler for Hadoop that allows Hadoop YARN applications to share resources in clusters fairly. The implementation is available on GitHub. The Java concrete class is org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.
  • FifoScheduler – A pluggable scheduler for Hadoop that allows Hadoop YARN applications to share resources in clusters on a first-in, first-out basis. The implementation is available on GitHub. The Java concrete class is org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler.
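The active Scheduler implementation is selected in yarn-site.xml through the yarn.resourcemanager.scheduler.class property. The following fragment shows the CapacityScheduler setting for illustration; on Amazon EMR this is already the default, so you don't need to set it yourself:

```xml
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```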


Upon negotiation of the first container by the ApplicationsManager, the per-application ApplicationMaster has the responsibility of negotiating the rest of the appropriate resources from the Scheduler, tracking their status, and monitoring progress.


The NodeManager is responsible for launching and managing containers on a node.

Hadoop YARN on Amazon EMR

By default, Amazon EMR (on Amazon EC2) uses Hadoop YARN for cluster management for the distributed data processing frameworks that support Hadoop YARN as a resource manager, like Apache Spark, Apache MapReduce, and Apache Hive. Amazon EMR provides several sensible default settings that work for most scenarios. However, every data platform is different and has specific needs. Amazon EMR provides the ability to customize the settings at cluster creation using configuration classifications. You can also reconfigure Amazon EMR cluster applications and specify additional configuration classifications for each instance group in a running cluster using the AWS Command Line Interface (AWS CLI) or the AWS SDK.
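As a sketch of what such an in-place reconfiguration looks like, the AWS CLI accepts a JSON file describing the instance group and the classifications to apply. The instance group ID and the property shown below are placeholders for illustration:

```json
[
  {
    "InstanceGroupId": "ig-XXXXXXXXXXXXX",
    "Configurations": [
      {
        "Classification": "capacity-scheduler",
        "Properties": {
          "yarn.scheduler.capacity.maximum-am-resource-percent": "0.2"
        }
      }
    ]
  }
]
```

You would pass this file to a command such as `aws emr modify-instance-groups --cluster-id <cluster-id> --instance-groups file://reconfig.json` to apply the classification to a running cluster.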


CapacityScheduler depends on a ResourceCalculator to identify the available resources and calculate the allocation of the resources to the ApplicationMaster. The ResourceCalculator is an abstract Java class. Hadoop YARN currently provides two implementations:

  • DefaultResourceCalculator – In DefaultResourceCalculator, resources are calculated based on memory alone.
  • DominantResourceCalculator – DominantResourceCalculator is based on the Dominant Resource Fairness (DRF) model of resource allocation. The paper Dominant Resource Fairness: Fair Allocation of Multiple Resource Types, Ghodsi et al. [2011] describes DRF as follows: "DRF computes the share of each resource allocated to that user. The maximum among all shares of a user is called that user's dominant share, and the resource corresponding to the dominant share is called the dominant resource. Different users may have different dominant resources. For example, the dominant resource of a user running a computation-bound job is CPU, whereas the dominant resource of a user running an I/O-bound job is bandwidth. DRF simply applies max-min fairness across users' dominant shares. That is, DRF seeks to maximize the smallest dominant share in the system, then the second-smallest, and so on."
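To make the dominant-share idea concrete, here is a minimal sketch (not the YARN implementation) using the example from the Ghodsi et al. paper: a cluster with 9 CPUs and 18 GB of memory, where user A runs tasks needing <1 CPU, 4 GB> and user B runs tasks needing <3 CPU, 1 GB>.

```python
# Minimal illustration of Dominant Resource Fairness (not YARN code)
cluster = {"cpu": 9.0, "mem": 18.0}

def dominant_share(alloc, cluster):
    """A user's dominant share is the maximum of their per-resource shares."""
    return max(alloc[r] / cluster[r] for r in cluster)

# The DRF paper's equalized allocation: 3 tasks for user A, 2 tasks for user B.
alloc_a = {"cpu": 3 * 1.0, "mem": 3 * 4.0}   # 3 tasks of <1 CPU, 4 GB>
alloc_b = {"cpu": 2 * 3.0, "mem": 2 * 1.0}   # 2 tasks of <3 CPU, 1 GB>

print(dominant_share(alloc_a, cluster))  # memory is A's dominant resource: 12/18
print(dominant_share(alloc_b, cluster))  # CPU is B's dominant resource: 6/9
```

Both users end up with a dominant share of 2/3, which is exactly the max-min equalization across dominant shares that DRF seeks.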

Because of DRF, DominantResourceCalculator is a better ResourceCalculator for data processing environments running heterogeneous workloads. By default, Amazon EMR uses DefaultResourceCalculator for CapacityScheduler. This can be verified by checking the value of the yarn.scheduler.capacity.resource-calculator parameter in /etc/hadoop/conf/capacity-scheduler.xml.

Code walkthrough

CapacityScheduler provides several parameters to customize the scheduling behavior to meet specific needs. For a list of available parameters, refer to Hadoop: CapacityScheduler.

Refer to the configurations section in cloudformation/templates/emr.yaml to review all the CapacityScheduler parameters set as part of this post. In this example, we use two classifiers of Amazon EMR (on Amazon EC2):

  • yarn-site – The classification to update yarn-site.xml
  • capacity-scheduler – The classification to update capacity-scheduler.xml

For the various types of classifications available in Amazon EMR, refer to Customizing cluster and application configuration with earlier AMI versions of Amazon EMR.

In the AWS CloudFormation template, we have changed the ResourceCalculator of CapacityScheduler from the default, DefaultResourceCalculator, to DominantResourceCalculator. Data processing environments tend to run different kinds of jobs, for example, computation-bound jobs consuming heavy CPU, I/O-bound jobs consuming heavy bandwidth, and memory-bound jobs consuming heavy memory. As previously stated, DominantResourceCalculator is better suited for such environments due to its Dominant Resource Fairness model of resource allocation. If your data processing environment only runs memory-bound jobs, then modifying this parameter isn't necessary.
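As a sketch, the classification that switches the calculator could look like the following; the property and class names are the actual Hadoop ones, wrapped in the standard Amazon EMR configuration classification format:

```json
[
  {
    "Classification": "capacity-scheduler",
    "Properties": {
      "yarn.scheduler.capacity.resource-calculator": "org.apache.hadoop.yarn.util.resource.DominantResourceCalculator"
    }
  }
]
```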

You can find the codebase in the AWS Samples GitHub repository.


Prerequisites

For deploying the solution, you should have the following prerequisites:

Deploy the solution

To deploy the solution, complete the following steps:

  • Download the source code from the AWS Samples GitHub repository:
    git clone git@github.com:aws-samples/amazon-emr-yarn-capacity-scheduler.git

  • Create an Amazon Simple Storage Service (Amazon S3) bucket:
    aws s3api create-bucket --bucket emr-yarn-capacity-scheduler-<AWS_ACCOUNT_ID>-<AWS_REGION> --region <AWS_REGION>

  • Copy the cloned repository to the Amazon S3 bucket:
    aws s3 cp --recursive amazon-emr-yarn-capacity-scheduler s3://emr-yarn-capacity-scheduler-<AWS_ACCOUNT_ID>-<AWS_REGION>/

  • Update the following parameters in cloudformation/parameters/parameters.json:
    1. ArtifactsS3Repository – The S3 bucket name that was created in the previous step (emr-yarn-capacity-scheduler-<AWS_ACCOUNT_ID>-<AWS_REGION>).
    2. emrKeyName – An existing EC2 key name. If you don't have an existing key and want to create a new key, refer to Use an Amazon EC2 key pair for SSH credentials.
    3. clientCIDR – The CIDR range of the client machine for accessing the EMR cluster via SSH. You can run the following command to identify the IP of the client machine: echo "$(curl -s http://checkip.amazonaws.com)/32"
  • Deploy the AWS CloudFormation templates:
    aws cloudformation create-stack \
    --stack-name emr-yarn-capacity-scheduler \
    --template-url https://emr-yarn-capacity-scheduler-<AWS_ACCOUNT_ID>-<AWS_REGION>.s3.amazonaws.com/cloudformation/templates/main.yaml \
    --parameters file://amazon-emr-yarn-capacity-scheduler/cloudformation/parameters/parameters.json \
    --capabilities CAPABILITY_NAMED_IAM \
    --region <AWS_REGION>

  • On the AWS CloudFormation console, check for the successful deployment of the following stacks.

AWS CloudFormation Stack Deployment

  • On the Amazon EMR console, check for the successful creation of the emr-cluster-capacity-scheduler cluster.
  • Choose the cluster and on the Configurations tab, review the properties under the capacity-scheduler and yarn-site classification labels.

AWS EMR Configurations

Apache Hadoop YARN UI

  • SSH into the emr-cluster-capacity-scheduler cluster and review the following files. For instructions on how to SSH into the EMR primary node, refer to Connect to the master node using SSH.
    • /etc/hadoop/conf/yarn-site.xml
    • /etc/hadoop/conf/capacity-scheduler.xml

All the parameters set using the yarn-site and capacity-scheduler classifiers are reflected in these files. If an admin wants to update CapacityScheduler configs, they can directly update capacity-scheduler.xml and run the following command to apply the changes without interrupting any running jobs and services:

yarn rmadmin -refreshQueues

Changes to yarn-site.xml require the ResourceManager service to be restarted, which interrupts running jobs. As a best practice, refrain from manual changes and use version control for change management.

The CloudFormation template adds a bootstrap action to create test users (user1, user2, user3, user4, user5, and user6) on all the nodes and adds a step script to create HDFS directories for the test users.

Users can SSH into the primary node, sudo as different users, and submit Spark jobs to verify the job submission and CapacityScheduler behavior:

[hadoop@ip-xx-x-xx-xxx ~]$ sudo su - user1
[user1@ip-xx-x-xx-xxx ~]$ spark-submit --master yarn --deploy-mode cluster \
--class org.apache.spark.examples.SparkPi /usr/lib/spark/examples/jars/spark-examples.jar

You can validate the results from the ResourceManager web UI.

Apache Hadoop YARN Jobs List

Clean up

To avoid incurring future charges, delete the resources you created.

  • Delete the CloudFormation stack:
    aws cloudformation delete-stack --stack-name emr-yarn-capacity-scheduler

  • Delete the S3 bucket:
    aws s3 rb s3://emr-yarn-capacity-scheduler-<AWS_ACCOUNT_ID>-<AWS_REGION> --force

This command deletes the bucket and all files within it. The files aren't recoverable after deletion.


Conclusion

In this post, we discussed Apache Hadoop YARN and its various components. We discussed the types of schedulers available in Hadoop YARN. We dived deep into the specifics of Hadoop YARN CapacityScheduler and the use of Dominant Resource Fairness to efficiently allocate resources to submitted jobs. We also showcased how to implement the discussed concepts using AWS CloudFormation.

We encourage you to use this post as a starting point to implement CapacityScheduler on Amazon EMR (on Amazon EC2) and to customize the solution to meet your specific data platform goals.

About the authors

Suvojit Dasgupta is a Sr. Lakehouse Architect at Amazon Web Services. He works with customers to design and build data solutions on AWS.

Bharat Gamini is a Data Architect focused on big data and analytics at Amazon Web Services. He helps customers architect and build highly scalable, robust, and secure cloud-based analytical solutions on AWS.

