Yesterday when I was working on the Hadoop UT timeout issue, I found something that needs to be noted.

Normally we can set limits by editing the file /etc/security/limits.conf. For example, the following line:
   * soft nproc 10240
will change the default maximum number of processes a user can create.
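You can verify the limit from a fresh login shell with ulimit (the 10240 here just echoes the example value above; -u reports the max user processes):
   $ ulimit -u
   10240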
There's a hidden thing here:
By default, the system reads this file first and then reads all files in the directory /etc/security/limits.d, so entries there take effect last. In CentOS (6.4), there's a default file there named "90-nproc.conf" with the following content:
"
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     1024
root       soft    nproc     unlimited
"
YES, it overwrites the value for normal users here.
So if you only set nproc in limits.conf, no matter how high you set it, you still get 1024!
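You can see this for yourself (assuming the 10240 entry from above is in limits.conf and 90-nproc.conf is left untouched): a new login shell still reports the lower value:
   $ grep nproc /etc/security/limits.conf
   * soft nproc 10240
   $ ulimit -u
   1024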
The correct way to do this is to create another file in /etc/security/limits.d whose name sorts alphabetically after "90-nproc.conf", and put your settings there. For example, "stanley.conf" would be a good name for user stanley (smile)
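Something like this should do it (the filename, user name, and limit value here are just examples):
   $ cat /etc/security/limits.d/stanley.conf
   stanley    soft    nproc    10240
Then log out and back in, and check that the new limit has taken effect:
   $ ulimit -u
   10240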