🌐
GNU
gnu.org › software › hello › manual › libc › Sleeping.html
Sleeping (The GNU C Library)
The sleep function is declared in unistd.h. Resist the temptation to implement a sleep for a fixed amount of time by using the return value of sleep, when nonzero, to call sleep again. This will work with a certain amount of accuracy as long as signals arrive infrequently.
Top answer
1 of 4
6
Answer from cdhowie on Stack Overflow

You wouldn't with this code, since accurately measuring the time that code takes to execute is a difficult task.

To get to the question posed by your question title (you should really ask one question at a time...), the accuracy of said functions is dictated by the operating system. On Linux, the system clock granularity is 10ms, so timed process suspension via nanosleep() is only guaranteed to be accurate to 10ms, and even then it's not guaranteed to sleep for exactly the time you specify. (See below.)

On Windows, the clock granularity can be changed to accommodate power management needs (e.g. decrease the granularity to conserve battery power). See MSDN's documentation on the Sleep function.

Note that with Sleep()/nanosleep(), the OS only guarantees that the process suspension will last for at least as long as you specify. The execution of other processes can always delay resumption of your process.

Therefore, the key-up event sent by your code above will be sent at least 2.638 seconds later than the key-down event, and not a millisecond sooner. But it would be possible for the event to be sent 2.7, 2.8, or even 3 seconds later. (Or much later if a realtime process grabbed hold of the CPU and didn't relinquish control for some time.)
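
You can watch this "at least as long" behaviour directly by timing nanosleep() against a monotonic clock. A minimal POSIX sketch (the 10 ms request ties back to the granularity figure above):

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec req = { 0, 10 * 1000 * 1000 };  /* ask for 10 ms */
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    nanosleep(&req, NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_ms = (end.tv_sec - start.tv_sec) * 1e3
                      + (end.tv_nsec - start.tv_nsec) / 1e6;
    printf("requested 10.000 ms, slept %.3f ms\n", elapsed_ms);
    return 0;
}

The printed figure should land at or slightly above 10 ms; how far above depends on the timer granularity and scheduler load described above.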

2 of 4
1

Sleep works in terms of the standard Windows thread scheduling; it is accurate only to within about 20-50 milliseconds.

So it's OK for user-experience-dependent things, but it's absolutely inappropriate for real-time things.

Besides this, there are much better ways to simulate keyboard/mouse events. Please see SendInput.
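
For illustration, here is a minimal Win32 sketch that injects a key-down/key-up pair for the (arbitrarily chosen) 'A' key with SendInput; both events go in as one batch, so no Sleep() is needed between them:

#include <windows.h>

int main(void)
{
    INPUT in[2] = { 0 };

    in[0].type = INPUT_KEYBOARD;          /* key down */
    in[0].ki.wVk = 'A';

    in[1].type = INPUT_KEYBOARD;          /* key up */
    in[1].ki.wVk = 'A';
    in[1].ki.dwFlags = KEYEVENTF_KEYUP;

    SendInput(2, in, sizeof(INPUT));      /* inject both events at once */
    return 0;
}

If you do need a delay between the down and up events, you are back to Sleep() and the granularity caveats above.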

🌐
GNU
gnu.org › software › coreutils › sleep
sleep invocation (GNU Coreutils 9.8)
Due to shell aliases and built-in sleep functions, using an unadorned sleep interactively or in a script may get you different functionality than that described here.
🌐
Google Groups
groups.google.com › g › comp.unix.programmer › c › WqpQWYkA_NM
GNU sleep's "infinity" parameter
>>> The worst known case of this is Linux 2.6.9 with glibc 2.3.4, which
>>> can’t sleep more than 24.85 days (2^31 milliseconds). Similarly,
>>> cygwin 1.5.x, which can’t sleep more than 49.7 days (2^32 milliseconds).
>>> Solve this by breaking the sleep up into smaller chunks. */
>>
>> I see. Which leads to:
>> 1) Why not just use pause(2)? That's what I'd use if I were coding an
>> infinite sleep.
>
> Then why not use it?

Indeed. When I am invited to join the GNU coreutils team, I'll get right on it.
🌐
GNU
ftp.gnu.org › pub › old-gnu › Manuals › glibc-2.2.5 › html_node › Sleeping.html
The GNU C Library
Resist the temptation to implement a sleep for a fixed amount of time by using the return value of sleep, when nonzero, to call sleep again. This will work with a certain amount of accuracy as long as signals arrive infrequently. But each signal can cause the eventual wakeup time to be off by an additional second or so. Suppose a few signals happen to arrive in rapid succession by bad luck--there is no limit on how much this could shorten or lengthen the wait.
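
One robust alternative is nanosleep(), which on interruption reports the unslept remainder, so the error does not accumulate. A minimal sketch, assuming POSIX (sleep_fully is an invented helper name):

#define _POSIX_C_SOURCE 200112L
#include <errno.h>
#include <time.h>

/* Sleep for the full interval even if signals keep arriving. */
static int sleep_fully(struct timespec req)
{
    struct timespec rem;
    while (nanosleep(&req, &rem) == -1) {
        if (errno != EINTR)
            return -1;     /* a real error, not just a signal */
        req = rem;         /* resume with whatever time was left */
    }
    return 0;
}
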
🌐
Arch Linux Forums
bbs.archlinux.org › viewtopic.php
How does sleep command works? / GNU/Linux Discussion / Arch Linux Forums
you can also "ctrl-c" to cancel sleep while it's waiting. ... It looks like a loop that just continuously counts the number of nanoseconds that have passed to see if you've reached the requested time. Actually keeping track of nanoseconds is handled by timespec.
🌐
Reddit
reddit.com › r/c_programming › explain how the sleep command works.
r/C_Programming on Reddit: Explain how the sleep command works.
March 9, 2022 -

I have the source for the sleep command from coreutils in front of me but I do not quite understand what is actually making sleep sleep. I see some stuff about errors but nothing that I pick out to be the piece that is actually doing the work of the program.

I am looking at Coreutils version 9.0 from gnu/software/coreutils index.

Also I'm sure the source is pretty self-explanatory but I'm not super fluent in C yet. It's much like translating from Spanish to English.

Top answer
1 of 4
29
sleep() is a system call. It basically requests that the operating system remove the program from the list of programs that are eligible to run, and put it back on the list after a certain amount of time has passed. If there are other programs eligible to run, they now get their shot. Otherwise, the operating system just twiddles its thumbs until some program is eligible to run.

One thing to beware of: just because your program got put back on the list of programs eligible to run doesn't mean that it starts running right away. If there are other programs ahead of it in the list, it could be a while. For this reason the sleep() function really sleeps for at least as long as you requested. It could be longer. This is not a real-time environment.

One final issue: if your program receives a signal while sleeping, the operating system will put it back on the eligible list right away, and the sleep will end early (at the C level, nanosleep() fails with EINTR, while sleep() returns the number of seconds left unslept).
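
To see that signal behaviour concretely, here is a small POSIX sketch that interrupts a 5-second sleep after about 1 second (on_alarm is just an illustrative name):

#define _POSIX_C_SOURCE 200112L
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_alarm(int sig) { (void)sig; }   /* exists only to interrupt the sleep */

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = on_alarm;                  /* no SA_RESTART, so sleep() returns early */
    sigaction(SIGALRM, &sa, NULL);

    alarm(1);                                  /* deliver SIGALRM in ~1 second */
    unsigned int left = sleep(5);              /* ask for 5 seconds */
    printf("woke early, about %u seconds were left\n", left);
    return 0;
}
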
2 of 4
19
The main function inside sleep.c includes:

  if (xnanosleep (seconds))
    die (EXIT_FAILURE, errno, _("cannot read realtime clock"));

Meaning it calls the xnanosleep function, which comes from the xnanosleep gnulib module. And inside xnanosleep.c, in the xnanosleep function, we have:

  for (;;)
    {
      /* Linux-2.6.8.1's nanosleep returns -1, but doesn't set errno
         when resumed after being suspended.  Earlier versions would set
         errno to EINTR.  nanosleep from linux-2.6.10, as well as
         implementations by (all?) other vendors, doesn't return -1 in
         that case; either it continues sleeping (if time remains) or it
         returns zero (if the wake-up time has passed).  */
      errno = 0;
      if (nanosleep (&ts_sleep, NULL) == 0)
        break;
      if (errno != EINTR && errno != 0)
        return -1;
    }

Which means it's simply calling the nanosleep function. Details about the nanosleep function can be found at man nanosleep.
Top answer
1 of 8
87

Bash has a "loadable" sleep which supports fractional seconds, and eliminates overheads of an external command:

$ cd bash-3.2.48/examples/loadables
$ make sleep && mv sleep sleep.so
$ enable -f sleep.so sleep

Then:

$ which sleep
/usr/bin/sleep
$ builtin sleep
sleep: usage: sleep seconds[.fraction]
$ time (for f in `seq 1 10`; do builtin sleep 0.1; done)
real    0m1.000s
user    0m0.004s
sys     0m0.004s

The downside is that the loadables may not be provided with your bash binary, so you would need to compile them yourself as shown (though on Solaris it would not necessarily be as simple as above).

As of bash-4.4 (September 2016) all the loadables are now built and installed by default on platforms that support it, though they are built as separate shared-object files, and without a .so suffix. Unless your distro/OS has done something creative (sadly, RHEL/CentOS 8 builds bash-4.4 with the loadable extensions deliberately removed), you should be able to do this instead:

[ -z "$BASH_LOADABLES_PATH" ] &&
  BASH_LOADABLES_PATH=$(pkg-config bash --variable=loadablesdir 2>/dev/null)  
enable -f sleep sleep

(The man page implies BASH_LOADABLES_PATH is set automatically; I find this is not the case in the official distribution as of 4.4.12. If and when it is set correctly, you need only enable -f filename commandname as required.)

If that's not suitable, the next easiest thing to do is build or obtain sleep from GNU coreutils, which supports the required feature. The POSIX sleep command is minimal; older Solaris versions implemented only that. Solaris 11 sleep does support fractional seconds.

As a last resort you could use perl (or any other scripting language that you have to hand) with the caveat that initialising the interpreter may take time comparable to the intended sleep:

$ perl -e "select(undef,undef,undef,0.1);"
$ echo "after 100" | tclsh
2 of 8
182

The documentation for the sleep command from coreutils says:

Historical implementations of sleep have required that number be an integer, and only accepted a single argument without a suffix. However, GNU sleep accepts arbitrary floating point numbers. See Floating point.

Hence you can use sleep 0.1, sleep 1.0e-1 and similar arguments.

🌐
GitHub
github.com › JuliaLang › julia › issues › 12770
Accuracy and resolution of sleep() on Linux should be improved · Issue #12770 · JuliaLang/julia
July 7, 2015 - On Linux, the resolution of the function sleep() could be much better. The following test script: https://gist.github.com/ufechner7/d264fb714a551d333e6b shows an error of about 1.1 ms on my PC (core i7, 3.4 GHz, no hyper threading, Ubuntu 12.04). (Julia: 0.4.0-dev+6909). The equivalent Python script shows an error of only 0.08 ms. https://gist.github.com/ufechner7/1aa3e96a8a5972864cec · Not only the accuracy should be increased, but also the resolution. This is ...
Published Aug 23, 2015
🌐
Wikipedia
en.wikipedia.org › wiki › Sleep_(command)
sleep (command) - Wikipedia
August 11, 2025 - However, sleep 5.5h (a floating point) is allowed. Consecutive executions of sleep can also be used. ... Sleep 5 hours, then sleep another 30 minutes. The GNU Project's implementation of sleep (part of coreutils) allows the user to pass an arbitrary floating point or multiple arguments, therefore sleep 5h 30m (a space separating hours and minutes is needed) will work on any system which uses GNU sleep, including Linux.
🌐
Linux Man Pages
man7.org › linux › man-pages › man1 › sleep.1.html
sleep(1) - Linux manual page
Pause for NUMBER seconds, where NUMBER is an integer or floating-point. SUFFIX may be 's','m','h', or 'd', for seconds, minutes, hours, days. With multiple arguments, pause for the sum of their values. --help display this help and exit --version output version information and exit · Written by Jim Meyering and Paul Eggert. GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/>
🌐
InformIT
informit.com › articles › article.aspx
nanosleep: High-Precision Sleeping | Linux System Calls | InformIT
Instead of sleeping an integral number of seconds, nanosleep takes as its argument a pointer to a struct timespec object, which can express time to nanosecond precision. However, because of the details of how the Linux kernel works, the actual precision provided by nanosleep is 10 milliseconds—...
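
That struct is just two fields, whole seconds plus nanoseconds. Converting a floating-point duration into one (roughly what coreutils does in xnanosleep, quoted earlier) can be sketched like this; to_timespec is an invented name:

#include <math.h>
#include <time.h>

/* Split a non-negative duration in seconds into a struct timespec. */
static struct timespec to_timespec(double seconds)
{
    struct timespec ts;
    double whole;
    double frac = modf(seconds, &whole);   /* separate integral and fractional parts */

    ts.tv_sec  = (time_t) whole;
    ts.tv_nsec = (long) (frac * 1e9);      /* fractional part as nanoseconds */
    return ts;
}

With that, the 2.638-second example from the top answer becomes: struct timespec ts = to_timespec(2.638); nanosleep(&ts, NULL);
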
🌐
nixCraft
cyberciti.biz › nixcraft › howto › bash shell › what does the sleep command do in linux?
What does the sleep command do in Linux? - nixCraft
December 13, 2022 - /bin/sleep is the Linux or Unix command to delay for a specified amount of time. You can suspend the calling shell script for a specified time. For example, pause for 10 seconds or stop execution for 2 minutes. In other words, the sleep command pauses execution of the next shell command for a given time. The GNU version of the sleep command supports additional options
Top answer
1 of 8
47

The "update" to question shows some misunderstanding of how modern OSs work.

The kernel is not "allowed" a time slice. The kernel is the thing that gives out time slices to user processes. The "timer" is not set to wake the sleeping process up - it is set to stop the currently running process.

In essence, the kernel attempts to fairly distribute the CPU time by stopping processes that have been on the CPU too long. For a simplified picture, let's say that no process is allowed to use the CPU for more than 2 milliseconds. So, the kernel would set the timer to 2 milliseconds, and let the process run. When the timer fires an interrupt, the kernel gets control. It saves the running process' current state (registers, instruction pointer and so on), and control is not returned to it. Instead, another process is picked from the list of processes waiting to be given CPU time, and the interrupted process goes to the back of the queue.

The sleeping process is simply not in the queue of things waiting for CPU. Instead, it's stored in the sleep queue. Whenever the kernel gets a timer interrupt, the sleep queue is checked, and the processes whose time has come are transferred to the "waiting for CPU" queue.

This is, of course, a gross simplification. It takes very sophisticated algorithms to ensure security and fairness, balance priorities, prevent starvation, and do it all fast with a minimal amount of memory used for kernel data.

2 of 8
36

There's a kernel data structure called the sleep queue. It's a priority queue. Whenever a process is added to the sleep queue, the expiration time of the most-soon-to-be-awakened process is calculated, and a timer is set. At that time, the expired job is taken off the queue and the process resumes execution.

(amusing trivia: in older unix implementations, there was a queue for processes for which fork() had been called, but for which the child process had not been created. It was of course called the fork queue.)

HTH!
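
As a toy, user-space illustration of the sleep-queue idea from both answers (all names invented; a real kernel is vastly more involved), keep sleepers sorted by wake-up time and, on every timer tick, move the expired ones back to the run queue:

#include <stdio.h>

#define MAX_PROCS 8

struct proc { int pid; long wake_at; };           /* wake_at is in ticks */

static struct proc sleepq[MAX_PROCS];             /* kept sorted by wake_at */
static int nsleeping;
static long now;                                  /* current tick count */

/* Insert a process, keeping the queue sorted so the head expires first. */
static void sleep_until(int pid, long wake_at)
{
    int i = nsleeping++;
    while (i > 0 && sleepq[i - 1].wake_at > wake_at) {
        sleepq[i] = sleepq[i - 1];
        i--;
    }
    sleepq[i].pid = pid;
    sleepq[i].wake_at = wake_at;
}

/* Called from the timer interrupt: wake everything whose time has come. */
static void timer_tick(void)
{
    now++;
    while (nsleeping > 0 && sleepq[0].wake_at <= now) {
        printf("tick %ld: pid %d moves to the run queue\n", now, sleepq[0].pid);
        for (int i = 1; i < nsleeping; i++)       /* pop the head */
            sleepq[i - 1] = sleepq[i];
        nsleeping--;
    }
}

int main(void)
{
    sleep_until(42, 3);                           /* pid 42 sleeps for 3 ticks */
    sleep_until(7, 1);                            /* pid 7 sleeps for 1 tick */
    for (int t = 0; t < 5; t++)
        timer_tick();
    return 0;
}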

Top answer
1 of 2
4

The behaviour is certainly related to your hypervisor.

time(7) says:

Real time is defined as time measured from some fixed point, either from a standard point in the past (see the description of the Epoch and calendar time below), or from some point (e.g., the start) in the life of a process (elapsed time).

Process time is defined as the amount of CPU time used by a process. This is sometimes divided into user and system components. User CPU time is the time spent executing code in user mode. System CPU time is the time spent by the kernel executing in system mode on behalf of the process (e.g., executing system calls). The time(1) command can be used to determine the amount of CPU time consumed during the execution of a program.

Based on this, we can conclude that when we write:

$ time sleep 1

real    0m1.002s
user    0m0.002s
sys     0m0.000s

real is the real time, meaning the actual time (sometimes called wall clock time) spent in the process. user is the CPU time (CPU cycles / clock frequency) spent executing code in user mode, and sys is the CPU time (CPU cycles / clock frequency) spent by the kernel executing in system mode on behalf of the process.
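
You can recover the same three numbers from inside a program: wall-clock time from a monotonic clock, user and system CPU time from getrusage(2). A sketch (Linux/POSIX):

#include <stdio.h>
#include <sys/resource.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec t0, t1;
    struct rusage ru;

    clock_gettime(CLOCK_MONOTONIC, &t0);

    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)   /* burn some user-mode CPU */
        x += i;
    sleep(1);                                         /* real time passes, no CPU used */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    getrusage(RUSAGE_SELF, &ru);

    printf("real %.3f s\n", (t1.tv_sec - t0.tv_sec)
                          + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    printf("user %.3f s\n", ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6);
    printf("sys  %.3f s\n", ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6);
    return 0;
}

Here real should come out roughly user + 1 second: the sleeping second shows up on the wall clock but in neither CPU column.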

To paraphrase your problem:

Why doesn't real time reported by time(1) match my watch?

When you run an OS on bare metal, you'll usually have a battery-powered crystal oscillator which runs at a constant frequency. This hardware clock will keep track of the time since the epoch. The number of oscillations per second can be tuned to correct for drift (see hwclock(8)).

time(7) also says:

The accuracy of various system calls that set timeouts, (e.g., select(2), sigtimedwait(2)) and measure CPU time (e.g., getrusage(2)) is limited by the resolution of the software clock, a clock maintained by the kernel which measures time in jiffies. The size of a jiffy is determined by the value of the kernel constant HZ.

The hardware clock is used to initialize the system clock (which would otherwise only know the time since boot). I suspect your hypervisor (VirtualBox) uses some hardware clock to initialize the time. After that, the software clock takes over.

rtc(4) says:

[hardware clocks] should not be confused with the system clock, which is a software clock maintained by the kernel and used to implement gettimeofday(2) and time(2), as well as setting timestamps on files, and so on.

What we just learned here is that time(2) (which is the library call used by the time(1) utility) actually gets its info from the system clock, not the hardware clock.

The software clock is maintained by the kernel, which measures time in jiffies. This is a unit of time determined by a kernel constant. As far as I understand it, a certain number of CPU cycles will increment one jiffy. So if the OS thinks the CPU is running at 2.0 GHz, but the CPU is actually running at 1.0 GHz, then one jiffy would actually take 2 ms when compared to a wall clock, instead of the expected 1 ms.

When running with physical hardware, we tell the CPU how fast we want it to run (slower for power-saving, faster for performance), then we assume that the hardware does what it promised because physical hardware does do that. The trick is that when the "hardware" is virtual, then the hypervisor decides how to control the virtual CPU, not the laws of physics.

A hypervisor running in userspace (like VirtualBox) will be at the mercy of the host kernel to give it the cycles it needs. If the host system is running 1000 virtual machines, you can imagine that each guest VM will only get a portion of the CPU cycles it was expecting, causing the guest system clocks to increment at a slower rate. Even if a hypervisor gets all of the resources it needs, it can also choose to throttle the resources as it sees fit, leaving the guest OS to run slower than it expects without understanding why.

2 of 2
1

Found this answer in Clock drift in a VirtualBox guest:

In Virtualbox Manager, changing the Paravirtualization value (System settings --> Acceleration tab) from Default to Minimal corrected the problem.

🌐
GitHub
github.com › coreutils › coreutils › blob › master › src › sleep.c
coreutils/src/sleep.c at master · coreutils/coreutils
static bool
apply_suffix (double *x, char suffix_char)
{
  int multiplier;

  switch (suffix_char)
    {
    case 0:
    case 's':
      multiplier = 1;
      break;
    case 'm':
      multiplier = 60;
      break;
    case 'h':
      multiplier = 60 * 60;
      break;
    case 'd':
      multiplier = 60 * 60 * 24;
      break;
    default:
      return false;
    }

  *x = dtimespec_bound (*x * multiplier, 0);
  return true;
}

int
main (int argc, char **argv)
{
  double seconds = 0.0;
  bool ok = true;

  initialize_main (&argc, &argv);
  set_program_name (argv[0]);
  setlocale (LC_ALL, "");
  bindtextdomain (PACKAGE, LOCALEDIR);
  textdomain (PACKAGE);
  atexit (close_stdout);

  parse_gnu_standard_options_only (argc, argv, PROGRAM_NAME, PACKAGE_NAME,
                                   Version, true, usage, AUTHORS,
                                   (char const *) nullptr);
Author   coreutils
🌐
Reddit
reddit.com › r/commandline › pop quiz: what does `sleep 1 2 3` do?
r/commandline on Reddit: Pop quiz: what does `sleep 1 2 3` do?
September 3, 2014 -

Here are the results across a few different systems (no shell built-ins though):

sysadment@freebsd10 $ (date +%s ; /bin/sleep 1 2 3 ; date +%s)
1438748148
usage: sleep seconds
1438748148

sysadment@macosx $ (date +%s ; /bin/sleep 1 2 3 ; date +%s)
1438748171
usage: sleep seconds
1438748171

sysadment@debian7 $ (date +%s ; /bin/sleep 1 2 3 ; date +%s)
1438748200
1438748206

Yeah, I didn't expect that last one either. GNU coreutils sleep, when given two or more arguments, sleeps for a time equal to the sum of the argument values. TIL

🌐
Linux Man Pages
linux.die.net › man › 3 › sleep
sleep(3): sleep for specified number of seconds - Linux man page
sleep() makes the calling thread sleep until seconds seconds have elapsed or a signal arrives which is not ignored.