You wouldn't with this code, since accurately measuring the time that code takes to execute is a difficult task.
To get to the question posed in your title (you should really ask one question at a time...): the accuracy of said functions is dictated by the operating system. On Linux, the system clock granularity is 10ms, so timed process suspension via nanosleep() is only guaranteed to be accurate to 10ms, and even then it's not guaranteed to sleep for exactly the time you specify. (See below.)
On Windows, the clock granularity can be changed to accommodate power management needs (e.g. decrease the granularity to conserve battery power). See MSDN's documentation on the Sleep function.
Note that with Sleep()/nanosleep(), the OS only guarantees that the process suspension will last for at least as long as you specify. The execution of other processes can always delay resumption of your process.
Therefore, the key-up event sent by your code above will be sent at least 2.638 seconds later than the key-down event, and not a millisecond sooner. But it would be possible for the event to be sent 2.7, 2.8, or even 3 seconds later. (Or much later if a realtime process grabbed hold of the CPU and didn't relinquish control for some time.)
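You can see the "at least this long" behaviour for yourself by timing a short nanosleep() call against a monotonic clock. This is just a minimal sketch, assuming a POSIX system with clock_gettime(); the printed value should never be below the requested interval, but how far above it lands depends on timer granularity and scheduler load:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec req = { 0, 10 * 1000 * 1000 };  /* ask for 10 ms */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    nanosleep(&req, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;

    /* elapsed should be >= 0.010, but will rarely be exactly 0.010 */
    printf("asked for 0.010000 s, actually slept %.6f s\n", elapsed);
    return 0;
}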
Sleep works in terms of the standard Windows thread scheduling. It is accurate only to about 20-50 milliseconds.
So it's OK for things that depend on user experience, but it's absolutely inappropriate for real-time work.
Besides this, there are much better ways to simulate keyboard/mouse events. Please see SendInput.
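For illustration, a minimal sketch of a press-hold-release with SendInput might look like the following. The virtual-key code 'A' and the 2638 ms hold time are placeholders, and the same "at least this long" caveat about Sleep applies:

#include <windows.h>

int main(void)
{
    INPUT in = {0};
    in.type = INPUT_KEYBOARD;
    in.ki.wVk = 'A';                  /* placeholder virtual-key code */

    SendInput(1, &in, sizeof(INPUT)); /* key down */

    Sleep(2638);                      /* at least ~2.638 s, per the caveats above */

    in.ki.dwFlags = KEYEVENTF_KEYUP;
    SendInput(1, &in, sizeof(INPUT)); /* key up */
    return 0;
}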
I have the source for the sleep command from coreutils in front of me, but I do not quite understand what is actually making sleep sleep. I see some stuff about errors, but nothing that I can pick out as the piece that actually does the work of the program.
I am looking at Coreutils version 9.0 from gnu/software/coreutils index.
Also I'm sure the source is pretty self-explanatory but I'm not super fluent in C yet. It's much like translating from Spanish to English.
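For what it's worth, the part that actually does the sleeping is a call into gnulib's xnanosleep(), which ultimately boils down to a nanosleep() loop. A stripped-down sketch of that idea (not the actual coreutils code) looks roughly like this:

#define _POSIX_C_SOURCE 200809L
#include <errno.h>
#include <time.h>

/* Simplified sketch of what 'sleep SECONDS' ends up doing:
   convert the (possibly fractional) argument to a timespec and
   keep calling nanosleep() until the whole interval has passed. */
static int sleep_for(double seconds)
{
    struct timespec ts;
    ts.tv_sec  = (time_t) seconds;
    ts.tv_nsec = (long) ((seconds - (double) ts.tv_sec) * 1e9);

    /* nanosleep() may be interrupted by a signal (EINTR); it then
       reports how much time was left, so we simply retry. */
    while (nanosleep(&ts, &ts) != 0) {
        if (errno != EINTR)
            return -1;
    }
    return 0;
}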
Bash has a "loadable" sleep which supports fractional seconds and eliminates the overhead of an external command:
$ cd bash-3.2.48/examples/loadables
$ make sleep && mv sleep sleep.so
$ enable -f sleep.so sleep
Then:
$ which sleep
/usr/bin/sleep
$ builtin sleep
sleep: usage: sleep seconds[.fraction]
$ time (for f in `seq 1 10`; do builtin sleep 0.1; done)
real 0m1.000s
user 0m0.004s
sys 0m0.004s
The downside is that the loadables may not be provided with your bash binary, so you would need to compile them yourself as shown (though on Solaris it would not necessarily be as simple as above).
As of bash-4.4 (September 2016) all the loadables are now built and installed by default on platforms that support it, though they are built as separate shared-object files without a .so suffix. Unless your distro/OS has done something creative (sadly RHEL/CentOS 8 builds bash-4.4 with the loadable extensions deliberately removed), you should be able to do this instead:
[ -z "$BASH_LOADABLES_PATH" ] &&
BASH_LOADABLES_PATH=$(pkg-config bash --variable=loadablesdir 2>/dev/null)
enable -f sleep sleep
(The man page implies BASH_LOADABLES_PATH is set automatically, but I find this is not the case in the official distribution as of 4.4.12. If and when it is set correctly, you need only enable -f filename commandname as required.)
If that's not suitable, the next easiest thing to do is build or obtain sleep from GNU coreutils, which supports the required feature. The POSIX sleep command is minimal; older Solaris versions implemented only that. Solaris 11 sleep does support fractional seconds.
As a last resort you could use perl (or any other scripting that you have to hand) with the caveat that initialising the interpreter may be comparable to the intended sleep time:
$ perl -e "select(undef,undef,undef,0.1);"
$ echo "after 100" | tclsh
The documentation for the sleep command from coreutils says:
Historical implementations of sleep have required that number be an integer, and only accepted a single argument without a suffix. However, GNU sleep accepts arbitrary floating point numbers. See Floating point.
Hence you can use sleep 0.1, sleep 1.0e-1 and similar arguments.
The "update" to question shows some misunderstanding of how modern OSs work.
The kernel is not "allowed" a time slice. The kernel is the thing that gives out time slices to user processes. The "timer" is not set to wake the sleeping process up - it is set to stop the currently running process.
In essence, the kernel attempts to distribute CPU time fairly by stopping processes that have been on the CPU too long. For a simplified picture, let's say that no process is allowed to use the CPU for more than 2 milliseconds. So, the kernel would set the timer to 2 milliseconds and let the process run. When the timer fires an interrupt, the kernel gets control. It saves the running process's current state (registers, instruction pointer and so on), and control is not returned to it. Instead, another process is picked from the list of processes waiting to be given the CPU, and the process that was interrupted goes to the back of the queue.
The sleeping process is simply not in the queue of things waiting for the CPU. Instead, it's stored in the sleep queue. Whenever the kernel gets a timer interrupt, the sleep queue is checked, and the processes whose time has come get transferred to the "waiting for CPU" queue.
This is, of course, a gross simplification. It takes very sophisticated algorithms to ensure security and fairness, balance priorities, prevent starvation, and do it all fast with a minimal amount of memory used for kernel data.
There's a kernel data structure called the sleep queue. It's a priority queue. Whenever a process is added to the sleep queue, the expiration time of the most-soon-to-be-awakened process is calculated, and a timer is set. At that time, the expired job is taken off the queue and the process resumes execution.
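As a toy model only (real kernels use far more sophisticated structures such as timer wheels or red-black trees, and none of the names below come from an actual kernel), a sleep queue kept sorted by wake-up time could be sketched like this:

#include <stdlib.h>

struct sleeper {
    unsigned long   wake_tick;      /* absolute tick at which to wake */
    int             pid;            /* hypothetical process id */
    struct sleeper *next;
};

static struct sleeper *sleep_queue; /* head = soonest to wake */

/* Put a process to sleep until wake_tick, keeping the list sorted. */
void sleep_enqueue(int pid, unsigned long wake_tick)
{
    struct sleeper  *s = malloc(sizeof *s);
    struct sleeper **p = &sleep_queue;

    s->pid = pid;
    s->wake_tick = wake_tick;
    while (*p && (*p)->wake_tick <= wake_tick)
        p = &(*p)->next;
    s->next = *p;
    *p = s;
}

/* Called from the timer interrupt: move every expired sleeper back
   to the run queue (represented here by the make_runnable callback). */
void sleep_queue_tick(unsigned long now, void (*make_runnable)(int pid))
{
    while (sleep_queue && sleep_queue->wake_tick <= now) {
        struct sleeper *s = sleep_queue;
        sleep_queue = s->next;
        make_runnable(s->pid);
        free(s);
    }
}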
(Amusing trivia: in older Unix implementations, there was a queue for processes for which fork() had been called, but for which the child process had not yet been created. It was of course called the fork queue.)
HTH!