If SCHED_AUTOGROUP is set, the nice value is not respected across cgroups.

Simple scenario:

A) Without SCHED_AUTOGROUP:
1. Open two terminals.
2. In the first terminal run: 'yes > /dev/null'
3. In the second terminal run: 'nice -n 19 yes > /dev/null'
Observe with 'top' that the 'yes' with nice 19 gets a significantly smaller portion of CPU time than the other one. This is good and expected.

B) With SCHED_AUTOGROUP turned on:
1. Open two terminals.
2. In the first terminal run: 'yes > /dev/null'
3. In the second terminal run: 'nice -n 19 yes > /dev/null'
Observe with 'top' that the 'yes' with nice 19 gets an *equal* portion of CPU time as the other one with nice 0. IMO, this is neither good nor expected.

The expected behaviour is that in B) the CPU time is distributed between the two 'yes' processes exactly as in A). Of course, this is a very simple scenario, with one process in each cgroup. What the behaviour should be with more processes, each with a *different* nice value, in the same cgroup may not be so trivial to decide, and it may be a topic for discussion. But whatever is decided, the basic scenario described here should always work as if SCHED_AUTOGROUP were turned off. One possible general behaviour could be for each cgroup to have an internal (or virtual) nice value of its own, computed as the average of the nice values of all processes within the cgroup.
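For reference, the large gap seen in A) follows from CFS's nice-to-weight table (sched_prio_to_weight[] in kernel/sched/core.c in current mainline), where nice 0 maps to weight 1024 and nice 19 to weight 15. A quick shell sketch of the resulting shares:

```shell
# CFS weights from the kernel's nice-to-weight table:
# nice 0 -> 1024, nice 19 -> 15.
W0=1024
W19=15
TOTAL=$((W0 + W19))
# Approximate CPU share of each task, in tenths of a percent.
SHARE0=$((W0 * 1000 / TOTAL))    # the nice-0 'yes'
SHARE19=$((W19 * 1000 / TOTAL))  # the nice-19 'yes'
echo "nice 0:  ${SHARE0}/1000 of the CPU"
echo "nice 19: ${SHARE19}/1000 of the CPU"
```

So the nice-19 task should end up with roughly 1.5% of the CPU, which matches what 'top' shows in A).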
That is exactly the expected behaviour of groups. Time is fairly distributed between groups (equally if each group has the same weight); after that, it is fairly distributed between the tasks of each group (equally if ...). The auto-group stuff creates a group per tty, which, combined with the above, corresponds to a group per terminal.
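Concretely, the behaviour in B) can be worked through as two levels of the same fair-share arithmetic (a sketch; the 1024 default group weight is assumed to be the nice-0 weight from the kernel's table):

```shell
# Level 1: both autogroups run at the default weight (nice 0 -> 1024),
# so CPU time is split 50/50 between the two terminals.
GA=1024
GB=1024
GA_SHARE=$((GA * 100 / (GA + GB)))
GB_SHARE=$((100 - GA_SHARE))
# Level 2: each group holds a single task, so that task inherits the
# whole group share regardless of its own nice value -- which is why
# the nice-19 'yes' still sees about half of the CPU.
echo "group A (nice 0 task):  ${GA_SHARE}%"
echo "group B (nice 19 task): ${GB_SHARE}%"
```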
> That is exactly the expected behaviour of groups. Time is fairly distributed
> between groups...

That is OK. But I'd expect that "nice" would also be taken into account for a group as such. Take this bug as a "wish for a feature" which you may consider implementing. If not, that's fine; I will not use cgroups and will stick with plain nice.