I have a cluster managed by YARN that runs Spark jobs; the components were installed using Ambari (18.104.22.168-235). I have 6 hosts, each with 6 cores, and I use the Fair Scheduler.
I want YARN to automatically add/remove executors (dynamic resource allocation), but no matter what I do it doesn't work.
Relevant Spark configuration (configured in Ambari):
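For reference, the Spark properties that matter for dynamic allocation on YARN typically look like the following; the values below are illustrative placeholders, not the actual settings from this cluster:

    spark.dynamicAllocation.enabled=true
    spark.shuffle.service.enabled=true
    spark.dynamicAllocation.initialExecutors=2
    spark.dynamicAllocation.minExecutors=1
    spark.dynamicAllocation.maxExecutors=12
    spark.dynamicAllocation.executorIdleTimeout=60s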
Relevant Yarn configuration (configured in Ambari):
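Again for reference, on the YARN side the Spark external shuffle service has to be registered as an auxiliary service on every NodeManager; a typical yarn-site.xml fragment (illustrative, not the actual cluster settings) is:

    yarn.nodemanager.aux-services=mapreduce_shuffle,spark_shuffle
    yarn.nodemanager.aux-services.spark_shuffle.class=org.apache.spark.network.yarn.YarnShuffleService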
It seems like --num-executors is being passed to the Spark client. If that's specified, then dynamic resource allocation does not kick in. Can you check what parameters are being passed to spark-submit?
Hi, and thanks for the reply. I ran the calculations from a Jupyter Notebook, so I didn't actively call spark-submit. spark.executor.instances is the equivalent configuration property, and it is not set. YARN's minimum-cores configuration and the relevant spark.dynamicAllocation properties were set, though.
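One quick way to verify what the notebook session actually ended up with is to dump the effective configuration from the live SparkContext. A minimal sketch in PySpark, assuming a session is already running in the notebook:

    # Print the allocation-related settings the running session actually uses
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    for key, value in sorted(spark.sparkContext.getConf().getAll()):
        if ("dynamicAllocation" in key or "executor" in key
                or "shuffle.service" in key):
            print(key, "=", value)

If spark.executor.instances shows up in that output even though it was never set in Ambari, something else (for example the notebook kernel's launch arguments) is injecting it.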
On Wed, Aug 1, 2018 at 10:45 PM, Suma Shivaprasad <[hidden email]> wrote:
If you are setting the configuration in code, it won't work; these parameters have to be in place before the application starts. You should pass the configuration parameters explicitly, e.g.:
spark-submit --num-executors ....
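(If the goal is dynamic allocation rather than a fixed executor count, the explicit flags would instead be the spark.dynamicAllocation properties. A sketch, with placeholder values and a hypothetical application file name:)

    # Example spark-submit invocation with explicit allocation settings.
    # All values and the file name my_app.py are placeholders.
    spark-submit \
      --master yarn \
      --deploy-mode client \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      --conf spark.dynamicAllocation.minExecutors=1 \
      --conf spark.dynamicAllocation.maxExecutors=12 \
      my_app.py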
Sudeep Singh Thakur
On Thu 2 Aug, 2018, 5:56 AM Anton Puzanov, <[hidden email]> wrote: