Errors when initializing LinuxContainerExecutor on Hadoop-3.0.0-alpha2

Errors when initializing LinuxContainerExecutor on Hadoop-3.0.0-alpha2

Jasson Chenwei
Hi all,

I am trying to configure cgroups and the Docker runtime on Hadoop-3.0.0-alpha2. Based on the documentation, LinuxContainerExecutor is required. However, I do not want to set up a secure cluster. Is there any way to bypass this?


I also noticed that cgroups do not require a secured cluster. However, after configuring

 <property>
        <name>yarn.nodemanager.container-executor.class</name>
        <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
 </property>

 <property>
        <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
        <value>org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler</value>
 </property>
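
I understand that for actual cgroup enforcement I will eventually need to switch the resources handler to CgroupsLCEResourcesHandler and also set a group and a non-secure local user for the executor. A rough sketch of what I expect that to look like (the values are placeholders and the exact property set should be checked against the 3.0.0-alpha2 docs):

 <property>
        <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
        <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
 </property>

 <property>
        <!-- placeholder: the group that owns container-executor and that the NM user belongs to -->
        <name>yarn.nodemanager.linux-container-executor.group</name>
        <value>hadoop</value>
 </property>

 <property>
        <!-- placeholder: the local user containers run as in non-secure mode -->
        <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user</name>
        <value>nobody</value>
 </property>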

I get these errors:

2017-05-16 19:23:01,856 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to initialize container executor
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:316)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:735)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:796)
Caused by: java.io.IOException: Linux container executor not configured properly (error=24)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:244)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:314)
    ... 3 more
Caused by: org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException: ExitCodeException exitCode=24: File /home/cwei/project/hadoop-3.0.0-alpha2/etc/hadoop/container-executor.cfg must be owned by root, but is owned by 1001

    at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:179)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:205)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:237)
    ... 4 more
Caused by: ExitCodeException exitCode=24: File /home/cwei/project/hadoop-3.0.0-alpha2/etc/hadoop/container-executor.cfg must be owned by root, but is owned by 1001

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:981)
    at org.apache.hadoop.util.Shell.run(Shell.java:884)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1180)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:151)
    ... 6 more

Could anyone help me solve this?


PS. After checking the source code, I found that the NodeManager calls privilegedOperationExecutor.executePrivilegedOperation(checkSetupOp, false), which ultimately invokes container-executor under the HADOOP_HOME/bin folder. Running ./bin/container-executor directly also reports this error, which does not make much sense to me.
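
In case it helps to reproduce the check in isolation: checkSetupOp seems to map to the binary's --checksetup command, so something like the following (run from my install directory, which is the path in the error above) should hit the same validation:

cd /home/cwei/project/hadoop-3.0.0-alpha2
# ask container-executor to validate its own configuration and permissions
./bin/container-executor --checksetup
# a non-zero exit status (24 in my case) means the config/permission check failed
echo $?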


Best, I appreciate any help.



Wei

Re: Errors when initializing LinuxContainerExecutor on Hadoop-3.0.0-alpha2

Weiwei Yang
Hi Jasson

The problem seems to be an incorrect file permission. The doc says: the container-executor program must be owned by root and have the permission set ---sr-s---.
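
Concretely, for the executable itself that usually means something like the sketch below (the "hadoop" group is only a placeholder for whatever yarn.nodemanager.linux-container-executor.group is set to):

# make the container-executor binary owned by root, group-owned by the NM's special group,
# and give it mode 6050 (setuid/setgid, group read/execute, nothing for others)
sudo chown root:hadoop $HADOOP_HOME/bin/container-executor
sudo chmod 6050 $HADOOP_HOME/bin/container-executor
# verify owner, group and mode
ls -l $HADOOP_HOME/bin/container-executor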

Weiwei Yang


Re: Errors when initializing LinuxContainerExecutor on Hadoop-3.0.0-alpha2

Jasson Chenwei
Hi Weiwei,

Thanks for the reply. I have configured the permissions as required:

[Inline image 1: screenshot of the configured permissions]


However, I still get the error:

2017-05-16 20:04:32,448 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to initialize container executor
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:316)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:735)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:796)
Caused by: java.io.IOException: Linux container executor not configured properly (error=24)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:244)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:314)
    ... 3 more
Caused by: org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException: ExitCodeException exitCode=24: File /home/cwei/project/hadoop-3.0.0-alpha2/etc/hadoop must be owned by root, but is owned by 1001

    at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:179)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:205)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:237)
    ... 4 more
Caused by: ExitCodeException exitCode=24: File /home/cwei/project/hadoop-3.0.0-alpha2/etc/hadoop must be owned by root, but is owned by 1001

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:981)
    at org.apache.hadoop.util.Shell.run(Shell.java:884)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1180)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:151)
    ... 6 more


As shown above, it now requires the whole etc/hadoop folder to be owned by root, which also does not make sense to me. I checked the container-executor code and found this piece of code that does the permission checking:

/**
 * Ensure that the configuration file and all of the containing directories
 * are only writable by root. Otherwise, an attacker can change the
 * configuration and potentially cause damage.
 * returns 0 if permissions are ok
 */
int check_configuration_permissions(const char* file_name) {
  // copy the input so that we can modify it with dirname
  char* dir = strdup(file_name);
  char* buffer = dir;
  do {
    if (!is_only_root_writable(dir)) {
      free(buffer);
      return -1;
    }
    dir = dirname(dir);
  } while (strcmp(dir, "/") != 0);
  free(buffer);
  return 0;
}

Obviously, it checks recursively all the way up to "/". I have never tried secure YARN before, so I am not sure whether this is related to the secure-YARN configuration. I am hoping to bypass these "annoying" security checks so that I can test the Docker runtime directly.
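
Which level actually trips the check can be seen by walking up the path from the error by hand, roughly like this (every level must be owned by root and not writable by group or other):

# print owner and permissions for the config file and each ancestor directory
f=/home/cwei/project/hadoop-3.0.0-alpha2/etc/hadoop/container-executor.cfg
d=$f
while [ "$d" != "/" ]; do
  ls -ld "$d"
  d=$(dirname "$d")
done
# the loop stops before the root directory itself
ls -ld /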



Wei 







Re: Errors when initializing LinuxContainerExecutor on Hadoop-3.0.0-alpha2

Weiwei Yang
Hi Jasson

They need to be owned by root to prevent malicious users from modifying them. I don't think you need to set 6050 on container-executor.cfg; that is the permission for the executable, i.e. hadoop-yarn/bin/container-executor. For container-executor.cfg (the config file), it needs to be owned by root, as the code implies, and the same goes for its parent directories. See the documentation at https://hadoop.apache.org/docs/r3.0.0-alpha2/hadoop-project-dist/hadoop-common/SecureMode.html#LinuxContainerExecutor.
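
Since the install lives under a home directory, the parent-directory check will keep failing there. A rough sketch of the usual way around that (the paths reuse the one from your error, the 'hadoop' group is a placeholder, and the build property name should be double-checked against the build docs):

# keep container-executor.cfg under a path that is already root-owned all the way up,
# e.g. /etc/hadoop. The directory the binary reads its config from is fixed at build time,
# typically with something like:
#   mvn package -Pdist,native -Dcontainer-executor.conf.dir=/etc/hadoop
sudo mkdir -p /etc/hadoop
sudo cp /home/cwei/project/hadoop-3.0.0-alpha2/etc/hadoop/container-executor.cfg /etc/hadoop/
sudo chown root:hadoop /etc/hadoop/container-executor.cfg   # owned by root
sudo chmod 0640 /etc/hadoop/container-executor.cfg          # not writable by group or other
# the alternative -- chown'ing every directory from the cfg file up to / -- is rarely
# practical when the distribution sits inside a home directory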

Weiwei Yang



Re: Errors when initializing LinuxContainerExecutor on Hadoop-3.0.0-alpha2

Jasson Chenwei
Makes sense to me. I will check the configuration.

Many thanks for the guidance.


Wei
