Thread scheduling implications in Java
Our discussion of how threads work, and in particular
of how thread scheduling operates on a typical platform, leads
to various implications and restrictions that we can expect of Java threads, which we'll outline here.
In some cases, we point to separate pages with fuller discussions.
Thread control
Firstly, the way that thread scheduling works has implications on various Java methods that
control threads (which generally interact with the underlying operating system thread APIs):
- the granularity and responsiveness of the Thread.sleep()
method are largely determined by the scheduler's interrupt period and by
how quickly the sleeping thread becomes the "chosen" thread again (see the sketch after this list);
- the precise function of the setPriority() method depends on the specific
OS's interpretation of priority (and which underlying API call Java actually uses when several are available):
for more information, see the more detailed section on thread priority;
- the behaviour of the Thread.yield() method is similarly determined
by what the particular underlying API calls do, and by which of them the VM implementation actually chooses.
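As a rough illustration of the first point, the sketch below times how much longer Thread.sleep() actually takes than the single millisecond requested; the class name, iteration count and sleep duration are arbitrary choices for illustration. The overshoot observed will depend on the scheduler's interrupt period and the current load on the machine.

    public class SleepGranularity {
        public static void main(String[] args) throws InterruptedException {
            final int requestedMillis = 1;
            final int iterations = 50;
            long totalOvershootNanos = 0;
            for (int i = 0; i < iterations; i++) {
                long start = System.nanoTime();
                Thread.sleep(requestedMillis);
                long elapsed = System.nanoTime() - start;
                // Accumulate how much longer than requested we actually slept
                totalOvershootNanos += elapsed - requestedMillis * 1_000_000L;
            }
            System.out.printf("Average overshoot: %.2f ms%n",
                    totalOvershootNanos / (iterations * 1_000_000.0));
        }
    }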
"Granularity" of threads
Although our introduction to threading focussed on how to
create a thread, it turns out that it isn't appropriate to create a brand new thread
just for a very small task. Threads are actually quite a "coarse-grained" unit of execution, for
reasons that are hopefully becoming clear from the previous sections.
Overhead and limits of creating and destroying threads
We mentioned that certain structures need to be allocated and deallocated when a thread
is created or killed, including a stack and some kind of thread status structure or "control block".
In particular, the latter links into global, shared structures describing the currently running
threads, and access to these must be properly synchronized by the OS. The upshot is that:
- creating and tearing down threads isn't free: there'll be some CPU overhead
each time we do so (see the sketch after this list);
- there may be some moderate limit on the number of threads that
can be created, determined by the resources that a thread needs to have allocated
(if a process has 2GB of address space, and each thread has 512K of stack, that means
a maximum of a few thousand threads per process).
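As a very rough sketch of the first point, the following example (the class name and thread count are arbitrary) times how long it takes to create, start and join a series of short-lived threads one after another. The figure obtained varies widely by platform, but it illustrates that each thread carries a non-trivial setup and teardown cost.

    public class ThreadCreationCost {
        public static void main(String[] args) throws InterruptedException {
            final int count = 2000;
            long start = System.nanoTime();
            for (int i = 0; i < count; i++) {
                // Each thread runs a deliberately trivial task so that almost
                // all of the measured time is creation/teardown overhead
                Thread t = new Thread(() -> { });
                t.start();
                t.join();
            }
            long elapsedNanos = System.nanoTime() - start;
            System.out.printf("Approx. %.1f microseconds per thread%n",
                    elapsedNanos / (count * 1000.0));
        }
    }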
Although it's rare to do so, as of Java 1.4, it is possible to specify a stack size
to the Thread constructor.
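For example, the sketch below (the class name, thread name and 128K figure are arbitrary choices) passes a requested stack size to the four-argument Thread constructor; note that the JVM is free to treat the value as no more than a hint.

    public class SmallStackDemo {
        public static void main(String[] args) {
            Runnable task =
                () -> System.out.println("Running in " + Thread.currentThread().getName());
            // The fourth argument is the requested stack size in bytes; the JVM
            // may round it or ignore it entirely on some platforms.
            Thread t = new Thread(null, task, "small-stack-thread", 128 * 1024);
            t.start();
        }
    }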
Avoiding thread overhead in Java
In applications such as servers that need to continually execute short, multithreaded
tasks, the usual way to avoid the overhead of repeated thread creation is to create a thread pool.
That is, a number of threads are initially created and then sit permanently waiting for jobs to be
sent to them.
From Java 5, the Java API includes the ThreadPoolExecutor and various related
classes in the java.util.concurrent package for implementing job queues and
thread pools.
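By way of illustration, the sketch below (the pool size, job count and class name are arbitrary choices) uses the Executors factory class to obtain a fixed pool of four threads, backed by a ThreadPoolExecutor, and submits a series of short jobs to it rather than creating a new thread per job.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class PoolDemo {
        public static void main(String[] args) throws InterruptedException {
            // Four worker threads are created once and then reused for all jobs
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 20; i++) {
                final int jobNo = i;
                pool.submit(() -> System.out.println(
                        "Job " + jobNo + " on " + Thread.currentThread().getName()));
            }
            pool.shutdown();                          // stop accepting new jobs
            pool.awaitTermination(10, TimeUnit.SECONDS);
        }
    }

Executors.newFixedThreadPool() is simply a convenience factory; constructing a ThreadPoolExecutor directly gives finer control over the job queue, pool sizing and rejection policy.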
Next: context switches
On the next page, we look in more detail at the issue of context switches: namely,
what happens when the operating system "juggles" the different threads between the available CPUs. We outline techniques to
reduce the number of context switches in Java.