You wouldn't with this code, since accurately measuring the time that code takes to execute is a difficult task.

To get to the question posed by your question title (you should really ask one question at a time...) the accuracy of said functions is dictated by the operating system. On Linux, the system clock granularity is 10ms, so timed process suspension via nanosleep() is only guaranteed to be accurate to 10ms, and even then it's not guaranteed to sleep for exactly the time you specify. (See below.)

On Windows, the clock granularity can be changed to accommodate power management needs (e.g. decrease the granularity to conserve battery power). See MSDN's documentation on the Sleep function.

Note that with Sleep()/nanosleep(), the OS only guarantees that the process suspension will last for at least as long as you specify. The execution of other processes can always delay resumption of your process.

Therefore, the key-up event sent by your code above will be sent at least 2.638 seconds later than the key-down event, and not a millisecond sooner. But it would be possible for the event to be sent 2.7, 2.8, or even 3 seconds later. (Or much later if a realtime process grabbed hold of the CPU and didn't relinquish control for some time.)

Answer from cdhowie on Stack Overflow
GNU
gnu.org › software › libc › manual › html_node › Sleeping.html
Sleeping (The GNU C Library)
The sleep function is declared in unistd.h. Resist the temptation to implement a sleep for a fixed amount of time by using the return value of sleep, when nonzero, to call sleep again. This will work with a certain amount of accuracy as long as signals arrive infrequently.
Stack Overflow
stackoverflow.com › questions › 4128766 › how-accurate-is-sleep-or-sleep
c - How accurate is Sleep() or sleep() - Stack Overflow

Linux Man Pages
man7.org › linux › man-pages › man3 › sleep.3.html
sleep(3) - Linux manual page
GNU
gnu.org › software › coreutils › sleep
sleep invocation (GNU Coreutils 9.7)
sleep pauses for an amount of time specified by the sum of the values of the command line arguments. Synopsis: ... Each argument is a non-negative number followed by an optional unit; the default is seconds.
Reddit
reddit.com › r/c_programming › explain how the sleep command works.
r/C_Programming on Reddit: Explain how the sleep command works.

sleep() is a system call. It basically requests that the operating system remove the program from the list of programs that are eligible to run, and to put it back on the list after a certain amount of time has passed. If there are other programs eligible to run, they now get their shot. Otherwise, the operating system just twiddles its thumbs until some program is eligible to run.

One thing to beware of: just because your program got put back on the list of programs eligible to run doesn't mean that it starts running right away. If there are other programs ahead of it in the list, it could be a while. For this reason the sleep() function really sleeps for at least as long as you requested. It could be longer. This is not a real-time environment.

One final issue: if your program should receive a signal while sleeping, the operating system will put it back on the eligible list right away, and the sleep() function will fail with EINTR.

Stack Overflow
stackoverflow.com › questions › 1157209 › is-there-an-alternative-sleep-function-in-c-to-milliseconds
linux - Is there an alternative sleep function in C to milliseconds? - Stack Overflow

Yes - older POSIX standards defined usleep(), so this is available on Linux:

int usleep(useconds_t usec);

DESCRIPTION

The usleep() function suspends execution of the calling thread for (at least) usec microseconds. The sleep may be lengthened slightly by any system activity or by the time spent processing the call or by the granularity of system timers.

usleep() takes microseconds, so you will have to multiply the input by 1000 in order to sleep in milliseconds.


usleep() has since been deprecated and subsequently removed from POSIX; for new code, nanosleep() is preferred:

#include <time.h>

int nanosleep(const struct timespec *req, struct timespec *rem);

DESCRIPTION

nanosleep() suspends the execution of the calling thread until either at least the time specified in *req has elapsed, or the delivery of a signal that triggers the invocation of a handler in the calling thread or that terminates the process.

The structure timespec is used to specify intervals of time with nanosecond precision. It is defined as follows:

struct timespec {
    time_t tv_sec;        /* seconds */
    long   tv_nsec;       /* nanoseconds */
};

An example msleep() function implemented using nanosleep(), continuing the sleep if it is interrupted by a signal:

#include <time.h>
#include <errno.h>    

/* msleep(): Sleep for the requested number of milliseconds. */
int msleep(long msec)
{
    struct timespec ts;
    int res;

    if (msec < 0)
    {
        errno = EINVAL;
        return -1;
    }

    ts.tv_sec = msec / 1000;
    ts.tv_nsec = (msec % 1000) * 1000000;

    do {
        res = nanosleep(&ts, &ts);
    } while (res && errno == EINTR);

    return res;
}
Answer from caf on stackoverflow.com
Server Fault
serverfault.com › questions › 469247 › how-do-i-sleep-for-a-millisecond-in-bash-or-ksh
linux - How do I sleep for a millisecond in bash or ksh - Server Fault

Bash has a "loadable" sleep which supports fractional seconds, and eliminates the overhead of an external command:

$ cd bash-3.2.48/examples/loadables
$ make sleep && mv sleep sleep.so
$ enable -f sleep.so sleep

Then:

$ which sleep
/usr/bin/sleep
$ builtin sleep
sleep: usage: sleep seconds[.fraction]
$ time (for f in `seq 1 10`; do builtin sleep 0.1; done)
real    0m1.000s
user    0m0.004s
sys     0m0.004s

The downside is that the loadables may not be provided with your bash binary, so you would need to compile them yourself as shown (though on Solaris it would not necessarily be as simple as above).

As of bash-4.4 (September 2016) all the loadables are now built and installed by default on platforms that support them, though they are built as separate shared-object files, and without a .so suffix. Unless your distro/OS has done something creative (sadly, RHEL/CentOS 8 builds bash-4.4 with the loadable extensions deliberately removed), you should be able to do this instead:

[ -z "$BASH_LOADABLES_PATH" ] &&
  BASH_LOADABLES_PATH=$(pkg-config bash --variable=loadablesdir 2>/dev/null)  
enable -f sleep sleep

(The man page implies BASH_LOADABLES_PATH is set automatically, I find this is not the case in the official distribution as of 4.4.12. If and when it is set correctly you need only enable -f filename commandname as required.)

If that's not suitable, the next easiest thing to do is build or obtain sleep from GNU coreutils, which supports the required feature. The POSIX sleep command is minimal; older Solaris versions implemented only that. Solaris 11 sleep does support fractional seconds.

As a last resort you could use perl (or any other scripting language you have to hand), with the caveat that initialising the interpreter may take time comparable to the intended sleep:

$ perl -e "select(undef,undef,undef,0.1);"
$ echo "after 100" | tclsh
Answer from mr.spuratic on serverfault.com
Stack Overflow
stackoverflow.com › questions › 1133857 › how-accurate-is-pythons-time-sleep
How accurate is python's time.sleep()? - Stack Overflow

The accuracy of the time.sleep function depends on your underlying OS's sleep accuracy. For non-real-time OSs like stock Windows, the smallest interval you can sleep for is about 10-13ms. Above that 10-13ms minimum, I have seen sleeps accurate to within several milliseconds of the requested time.

Update: As mentioned in the docs cited below, it's common to do the sleep in a loop that will make sure to go back to sleep if it wakes you up early.

I should also mention that if you are running Ubuntu you can try out a pseudo real-time kernel (with the PREEMPT_RT patch set) by installing the rt kernel package (at least in Ubuntu 10.04 LTS).

Non-real-time Linux kernels have minimum sleep intervals much closer to 1ms than 10ms, but it varies in a non-deterministic manner.

Answer from Joseph Lisee on stackoverflow.com
Find elsewhere
Linux Man Pages
linux.die.net › man › 3 › sleep
sleep(3): sleep for specified number of seconds - Linux man page
sleep() makes the calling thread sleep until seconds seconds have elapsed or a signal arrives which is not ignored.
Linux Man Pages
man7.org › linux › man-pages › man1 › sleep.1.html
sleep(1) - Linux manual page
Quora
quora.com › In-programming-how-is-the-sleep-function-typically-implemented-Can-you-demonstrate-with-code
In programming, how is the 'sleep' function typically implemented? Can you demonstrate with code? - Quora
Answer (1 of 18): An example could be found in the source code for Linux. See: torvalds/linux Sleep.c Problem is, the sleep function is part of the Advanced Configuration and Power Interface or ACPI, as the thread that's executed just asks the processor to wait for a certain amount of ...
Google Groups
groups.google.com › g › comp.unix.programmer › c › WqpQWYkA_NM
GNU sleep's "infinity" parameter
Every so often, in a shell script or whatever, one needs to sleep (pause) "forever". Usually, what you do is something like: sleep <verybignumber> and that is usually sufficient. However, in fact, GNU sleep can be called with a parameter of "infinity", which, it seems, does really sleep "forever".
The Open Group
pubs.opengroup.org › onlinepubs › 009696799 › functions › sleep.html
sleep
The Open Group Base Specifications Issue 6 IEEE Std 1003.1, 2004 Edition Copyright © 2001-2004 The IEEE and The Open Group, All Rights reserved. ... The sleep() function shall cause the calling thread to be suspended from execution until either the number of realtime seconds specified by the ...
Wikipedia
en.wikipedia.org › wiki › Sleep_(command)
sleep (command) - Wikipedia
August 11, 2025 - Microsoft also provides a sleep resource kit tool for Windows which can be used in batch files or the command prompt to pause the execution and wait for some time. Another native version is the timeout command which is part of current versions of Windows. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU ...
Quora
quora.com › What-is-the-best-cross-platform-way-to-have-accurate-sleep-functions-in-C-By-the-way-this_thread-sleep-is-not-precise-enough
What is the best cross-platform way to have accurate sleep functions in C++? By the way, this_thread::sleep is not precise enough. - Quora
Answer (1 of 4): The problem is not that the sleep is inaccurate. The problem is not that some time is being taken between sleeps. The problem is that your computer is not running only this one thread. If you read the documentation for any sleep function, it will say something along the lines ...
Stack Overflow
stackoverflow.com › questions › 175882 › whats-the-algorithm-behind-sleep
c - What's the algorithm behind sleep()? - Stack Overflow

The "update" to the question shows some misunderstanding of how modern OSs work.

The kernel is not "allowed" a time slice. The kernel is the thing that gives out time slices to user processes. The "timer" is not set to wake the sleeping process up - it is set to stop the currently running process.

In essence, the kernel attempts to fairly distribute the CPU time by stopping processes that have been on the CPU too long. For a simplified picture, let's say that no process is allowed to use the CPU for more than 2 milliseconds. So, the kernel would set the timer to 2 milliseconds and let the process run. When the timer fires an interrupt, the kernel gets control. It saves the running process' current state (registers, instruction pointer and so on), and control is not returned to it. Instead, another process is picked from the list of processes waiting to be given the CPU, and the interrupted process goes to the back of the queue.

The sleeping process is simply not in the queue of things waiting for the CPU. Instead, it's stored in the sleep queue. Whenever the kernel gets a timer interrupt, the sleep queue is checked, and the processes whose time has come get transferred to the "waiting for CPU" queue.

This is, of course, a gross simplification. It takes very sophisticated algorithms to ensure security, fairness, and balance, to handle priorities, to prevent starvation, and to do it all fast with a minimal amount of memory used for kernel data.

Answer from user3458 on stackoverflow.com
Stack Overflow
stackoverflow.com › questions › 19427034 › is-sleep-inaccurate
c - Is Sleep() inaccurate? - Stack Overflow

Sleep() is accurate to the operating system's clock interrupt rate, which by default on Windows ticks 64 times per second, or once every 15.625 msec, as you found out.

You can increase that rate by calling timeBeginPeriod(10); use timeEndPeriod(10) when you're done. You are still subject to normal thread scheduling latencies, so you still don't have a guarantee that your thread will resume running after 10 msec, and it won't when the machine is heavily loaded. Using SetThreadPriority() to boost the priority increases the odds that it will.

Answer from Hans Passant on stackoverflow.com
Arch Linux Forums
bbs.archlinux.org › viewtopic.php
How does sleep command works? / GNU/Linux Discussion / Arch Linux Forums
Doesn't look like a lot of code is involved, but I can't follow it. I'd be interested to hear an explanation! ... sleep from coreutils basically calls nanosleep which is a system call, you can find the source in the kernel sources (link).
Stack Exchange
unix.stackexchange.com › questions › 607217 › sleep-command-delay-grossly-inaccurate-vm
virtualbox - sleep command delay grossly inaccurate (VM) - Unix & Linux Stack Exchange

The behaviour is certainly related to your hypervisor.

time(7) says:

Real time is defined as time measured from some fixed point, either from a standard point in the past (see the description of the Epoch and calendar time below), or from some point (e.g., the start) in the life of a process (elapsed time).

Process time is defined as the amount of CPU time used by a process. This is sometimes divided into user and system components. User CPU time is the time spent executing code in user mode. System CPU time is the time spent by the kernel executing in system mode on behalf of the process (e.g., executing system calls). The time(1) command can be used to determine the amount of CPU time consumed during the execution of a program.

Based on this, we can conclude that when we write:

$ time sleep 1

real    0m1.002s
user    0m0.002s
sys     0m0.000s

real is the real time, meaning the actual time (sometimes called wall clock time) spent in the process. user is the CPU time (CPU cycles / clock frequency) spent executing code in user mode and sys is the CPU time (CPU cycles / clock frequency) spent by the kernel executing in system mode on behalf of the process.

To paraphrase your problem:

Why doesn't real time reported by time(1) match my watch?

When you run an OS on bare metal, you'll usually have a battery-powered crystal oscillator which runs at a constant frequency. This hardware clock will keep track of the time since the epoch. The number of oscillations per second can be tuned to correct for drift (see hwclock(8)).

time(7) also says:

The accuracy of various system calls that set timeouts, (e.g., select(2), sigtimedwait(2)) and measure CPU time (e.g., getrusage(2)) is limited by the resolution of the software clock, a clock maintained by the kernel which measures time in jiffies. The size of a jiffy is determined by the value of the kernel constant HZ.

The hardware clock is used to initialize the system clock (which would otherwise only know the time since boot). I suspect your hypervisor (virtualbox) uses some hwclock to initialize the time. After that, the software clock takes over.

rtc(4) says:

[hardware clocks] should not be confused with the system clock, which is a software clock maintained by the kernel and used to implement gettimeofday(2) and time(2), as well as setting timestamps on files, and so on.

What we just learned here is that time(2) (which is the library calls used by the utility time(1)) actually gets info from the system clock, not the hardware clock.

The software clock is maintained by the kernel, which measures time in jiffies. This is a unit of time determined by a kernel constant. As far as I understand it, a certain number of CPU cycles will increment one jiffy. So if the OS thinks the CPU is running at 2.0 GHz, but the CPU is actually running at 1.0 GHz, then one jiffy would actually take 2ms when compared to a wall clock instead of the expected 1ms.

When running with physical hardware, we tell the CPU how fast we want it to run (slower for power-saving, faster for performance), then we assume that the hardware does what it promised because physical hardware does do that. The trick is that when the "hardware" is virtual, then the hypervisor decides how to control the virtual CPU, not the laws of physics.

A hypervisor running in userspace (like VirtualBox) will be at the mercy of the host kernel to give it the cycles it needs. If the host system is running 1000 virtual machines, you can imagine that each guest VM will only get a portion of the CPU cycles it was expecting, causing the guest system clocks to increment at a slower rate. Even if a hypervisor gets all of the resources it needs, it can also choose to throttle the resources as it sees fit, leaving the guest OS to run slower than it expects without understanding why.

Answer from Stewart on unix.stackexchange.com