You wouldn't with this code, since accurately measuring the time that code takes to execute is a difficult task.
To get to the question posed by your title (you should really ask one question at a time...): the accuracy of these functions is dictated by the operating system. On Linux, the system clock granularity is 10 ms, so a timed process suspension via nanosleep() is only guaranteed to be accurate to within 10 ms, and even then it is not guaranteed to sleep for exactly the time you specify. (See below.)
On Windows, the clock granularity can be changed to accommodate power management needs (e.g. decrease the granularity to conserve battery power). See MSDN's documentation on the Sleep function.
Note that with Sleep()/nanosleep(), the OS only guarantees that the process suspension will last for at least as long as you specify. The execution of other processes can always delay resumption of your process.
Therefore, the key-up event sent by your code above will be sent at least 2.638 seconds later than the key-down event, and not a millisecond sooner. But it would be possible for the event to be sent 2.7, 2.8, or even 3 seconds later. (Or much later if a realtime process grabbed hold of the CPU and didn't relinquish control for some time.)
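If you want to see that overshoot for yourself, a small C program that brackets nanosleep() with clock_gettime(CLOCK_MONOTONIC) will show it. This is only a measurement sketch; the exact numbers depend entirely on your kernel, timer configuration, and system load.

/* sleep_overshoot.c - a measurement sketch: how much longer than requested
   does nanosleep() actually sleep?  Compile with: cc -O2 sleep_overshoot.c */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec req = { 0, 2 * 1000 * 1000 };  /* request 2 ms */
    struct timespec t0, t1;

    for (int i = 0; i < 5; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        nanosleep(&req, NULL);                     /* may return early on a signal */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double elapsed_ms = (t1.tv_sec - t0.tv_sec) * 1e3
                          + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("requested 2.000 ms, slept %.3f ms\n", elapsed_ms);
    }
    return 0;
}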
Sleep() works in terms of the standard Windows thread scheduling, so it is only accurate to within roughly 20-50 milliseconds.
That makes it fine for things that depend on user experience, but absolutely inappropriate for real-time work.
Besides that, there are much better ways to simulate keyboard/mouse events. Please see SendInput.
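For reference, a minimal Win32 sketch of a key-down followed later by a key-up via SendInput() could look like the following; VK_SPACE and the 2638 ms delay are placeholder values, not anything taken from the original question.

/* A minimal Win32 sketch: press a key, wait, then release it via SendInput().
   VK_SPACE and the 2638 ms delay are placeholders.  Build e.g. with:
   cl sendkey.c user32.lib */
#include <windows.h>

int main(void)
{
    INPUT in = {0};
    in.type = INPUT_KEYBOARD;
    in.ki.wVk = VK_SPACE;                /* key down */
    SendInput(1, &in, sizeof(in));

    Sleep(2638);                         /* at least ~2.638 s, possibly longer */

    in.ki.dwFlags = KEYEVENTF_KEYUP;     /* key up */
    SendInput(1, &in, sizeof(in));
    return 0;
}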
I have the source for the sleep command from coreutils in front of me, but I do not quite understand what is actually making sleep sleep. I see some error handling, but nothing that I can pick out as the piece that actually does the work of the program.
I am looking at coreutils version 9.0 from the gnu/software/coreutils index.
Also, I'm sure the source is pretty self-explanatory, but I'm not super fluent in C yet; it's much like translating from Spanish to English.
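For what it's worth, stripped of the option parsing and error handling that surrounds it, what makes sleep sleep boils down to turning the arguments into a total number of seconds and handing that to nanosleep(). Here is a simplified sketch of that idea; it is not the verbatim coreutils source, which routes the call through gnulib helpers such as xnanosleep().

/* Simplified sketch of the idea behind coreutils sleep: turn the arguments
   into a total number of seconds (honouring the s/m/h/d suffixes) and hand
   that to nanosleep(), retrying if a signal interrupts it.  This is NOT the
   real source. */
#include <errno.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv)
{
    double seconds = 0.0;
    for (int i = 1; i < argc; i++) {
        char *end;
        double v = strtod(argv[i], &end);
        switch (*end) {                  /* optional suffix */
        case 'm': v *= 60; break;
        case 'h': v *= 60 * 60; break;
        case 'd': v *= 24 * 60 * 60; break;
        }
        seconds += v;
    }

    struct timespec ts;
    ts.tv_sec  = (time_t)seconds;
    ts.tv_nsec = (long)((seconds - (double)ts.tv_sec) * 1e9);

    /* nanosleep() writes the unslept remainder back, so just loop on EINTR. */
    while (nanosleep(&ts, &ts) == -1 && errno == EINTR)
        continue;

    return 0;
}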
Bash has a "loadable" sleep which supports fractional seconds, and eliminates overheads of an external command:
$ cd bash-3.2.48/examples/loadables
$ make sleep && mv sleep sleep.so
$ enable -f sleep.so sleep
Then:
$ which sleep
/usr/bin/sleep
$ builtin sleep
sleep: usage: sleep seconds[.fraction]
$ time (for f in `seq 1 10`; do builtin sleep 0.1; done)
real 0m1.000s
user 0m0.004s
sys 0m0.004s
The downside is that the loadables may not be provided with your bash binary, so you would need to compile them yourself as shown (though on Solaris it would not necessarily be as simple as above).
As of bash-4.4 (September 2016), all the loadables are built and installed by default on platforms that support it, though they are built as separate shared-object files without a .so suffix. Unless your distro/OS has done something creative (sadly, RHEL/CentOS 8 build bash-4.4 with the loadable extensions deliberately removed), you should instead be able to do:
[ -z "$BASH_LOADABLES_PATH" ] &&
BASH_LOADABLES_PATH=$(pkg-config bash --variable=loadablesdir 2>/dev/null)
enable -f sleep sleep
(The man page implies BASH_LOADABLES_PATH is set automatically; I find this is not the case in the official distribution as of 4.4.12. If and when it is set correctly, you need only enable -f filename commandname as required.)
If that's not suitable, the next easiest thing to do is to build or obtain sleep from GNU coreutils, which supports the required feature. The POSIX sleep command is minimal; older Solaris versions implemented only that, though the Solaris 11 sleep does support fractional seconds.
As a last resort you could use perl (or any other scripting language you have to hand), with the caveat that initialising the interpreter may take a time comparable to the intended sleep:
$ perl -e "select(undef,undef,undef,0.1);"
$ echo "after 100" | tclsh
The documentation for the sleep command from coreutils says:
Historical implementations of sleep have required that number be an integer, and only accepted a single argument without a suffix. However, GNU sleep accepts arbitrary floating point numbers. See Floating point.
Hence you can use sleep 0.1, sleep 1.0e-1 and similar arguments.
The "update" to question shows some misunderstanding of how modern OSs work.
The kernel is not "allowed" a time slice. The kernel is the thing that gives out time slices to user processes. The "timer" is not set to wake the sleeping process up - it is set to stop the currently running process.
In essence, the kernel attempts to distribute CPU time fairly by stopping processes that have been on the CPU too long. For a simplified picture, let's say that no process is allowed to use the CPU for more than 2 milliseconds. So the kernel would set the timer to 2 milliseconds and let the process run. When the timer fires an interrupt, the kernel gets control. It saves the running process's current state (registers, instruction pointer and so on), and control is not returned to it. Instead, another process is picked from the list of processes waiting to be given the CPU, and the interrupted process goes to the back of the queue.
The sleeping process is simply not in the queue of things waiting for the CPU. Instead, it's stored in the sleep queue. Whenever the kernel gets a timer interrupt, the sleep queue is checked, and the processes whose time has come are transferred to the "waiting for CPU" queue.
This is, of course, a gross simplification. It takes very sophisticated algorithms to ensure security, fairness, and balance, to prioritize, to prevent starvation, and to do all of it fast and with a minimum amount of memory used for kernel data.
There's a kernel data structure called the sleep queue. It's a priority queue. Whenever a process is added to the sleep queue, the expiration time of the most-soon-to-be-awakened process is calculated, and a timer is set. At that time, the expired job is taken off the queue and the process resumes execution.
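To make the shape of that data structure concrete, here is a toy user-space model of a sleep queue ordered by wakeup time. It is nothing like real kernel code (no locking, no bounds checking, a plain sorted array instead of the kernel's actual timer structures), just an illustration of the idea.

/* Toy model of a sleep queue: a priority queue of processes ordered by
   wakeup time.  Purely illustrative; the kernel's real data structures
   are far more involved. */
#include <stdio.h>

#define MAX_SLEEPERS 64

struct sleeper {
    int  pid;
    long wake_at;      /* absolute wakeup time, e.g. in ticks */
};

static struct sleeper queue[MAX_SLEEPERS];
static int nsleepers = 0;

/* Insert keeping the array sorted by wake_at (earliest first). */
static void sleep_enqueue(int pid, long wake_at)
{
    int i = nsleepers++;
    while (i > 0 && queue[i - 1].wake_at > wake_at) {
        queue[i] = queue[i - 1];
        i--;
    }
    queue[i].pid = pid;
    queue[i].wake_at = wake_at;
}

/* On each timer interrupt: move every expired sleeper back to the run queue. */
static void timer_tick(long now)
{
    while (nsleepers > 0 && queue[0].wake_at <= now) {
        printf("tick %ld: waking pid %d\n", now, queue[0].pid);
        for (int i = 1; i < nsleepers; i++)   /* shift the rest down */
            queue[i - 1] = queue[i];
        nsleepers--;
    }
}

int main(void)
{
    sleep_enqueue(101, 5);     /* pid 101 sleeps until "time" 5 */
    sleep_enqueue(102, 3);
    sleep_enqueue(103, 9);
    for (long now = 0; now <= 10; now++)
        timer_tick(now);
    return 0;
}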
(amusing trivia: in older unix implementations, there was a queue for processes for which fork() had been called, but for which the child process had not been created. It was of course called the fork queue.)
HTH!
The behaviour is certainly related to your hypervisor.
time(7) says:
Real time is defined as time measured from some fixed point, either from a standard point in the past (see the description of the Epoch and calendar time below), or from some point (e.g., the start) in the life of a process (elapsed time).
Process time is defined as the amount of CPU time used by a process. This is sometimes divided into user and system components. User CPU time is the time spent executing code in user mode. System CPU time is the time spent by the kernel executing in system mode on behalf of the process (e.g., executing system calls). The time(1) command can be used to determine the amount of CPU time consumed during the execution of a program.
Based on this, we can conclude that when we write:
$ time sleep 1
real 0m1.002s
user 0m0.002s
sys 0m0.000s
real is the real time, meaning the actual time (sometimes called wall-clock time) spent in the process. user is the CPU time (CPU cycles divided by the clock frequency) spent executing code in user mode, and sys is the CPU time spent by the kernel executing in system mode on behalf of the process.
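You can watch the three numbers diverge with a small C program: a busy loop adds to user time, repeated system calls add to system time, and sleeping adds only to real time. A rough sketch (the loop counts are arbitrary; tune them for your machine):

/* Rough sketch showing how real, user and sys time differ: the busy loop
   burns user time, the repeated getpid() calls burn (a little) system time,
   and the sleep only advances real (wall-clock) time. */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

static double tv_seconds(struct timeval tv) { return tv.tv_sec + tv.tv_usec / 1e6; }

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 200000000UL; i++)   /* user time */
        x += i;
    for (int i = 0; i < 200000; i++)                  /* a bit of system time */
        (void)getpid();
    sleep(1);                                         /* real time only */

    clock_gettime(CLOCK_MONOTONIC, &t1);

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("real %.3f s  user %.3f s  sys %.3f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9,
           tv_seconds(ru.ru_utime), tv_seconds(ru.ru_stime));
    return 0;
}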
To paraphrase your problem:
Why doesn't the real time reported by time(1) match my watch?
When you run an OS on bare metal, you'll usually have a battery-powered crystal oscillator which runs at a constant frequency. This hardware clock will keep track of the time since the epoch. The number of oscillations per second can be tuned to correct for drift (see hwclock(8)).
time(7) also says:
The accuracy of various system calls that set timeouts, (e.g., select(2), sigtimedwait(2)) and measure CPU time (e.g., getrusage(2)) is limited by the resolution of the software clock, a clock maintained by the kernel which measures time in jiffies. The size of a jiffy is determined by the value of the kernel constant HZ.
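You can ask the kernel what resolution it reports for its clocks with clock_getres(2); on kernels with high-resolution timers this is typically 1 ns. sysconf(_SC_CLK_TCK) exposes the coarser USER_HZ tick (usually 100 per second) used by times(), which is related to, though not necessarily the same as, the kernel's HZ. A quick check:

/* Ask the kernel what resolution it reports for its clocks, and what the
   traditional USER_HZ tick rate is. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec res;

    clock_getres(CLOCK_REALTIME, &res);
    printf("CLOCK_REALTIME resolution : %ld ns\n", res.tv_nsec);

    clock_getres(CLOCK_MONOTONIC, &res);
    printf("CLOCK_MONOTONIC resolution: %ld ns\n", res.tv_nsec);

    /* _SC_CLK_TCK is USER_HZ (usually 100), the unit used by times();
       it is related to, but not necessarily equal to, the kernel's HZ. */
    printf("sysconf(_SC_CLK_TCK)      : %ld ticks/second\n", sysconf(_SC_CLK_TCK));
    return 0;
}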
The hardware clock is used to initialize the system clock (which would otherwise only know the time since boot). I suspect your hypervisor (virtualbox) uses some hwclock to initialize the time. After that, the software clock takes over.
rtc(4) says:
[hardware clocks] should not be confused with the system clock, which is a software clock maintained by the kernel and used to implement gettimeofday(2) and time(2), as well as setting timestamps on files, and so on.
What we just learned here is that time(2) (the library call used by the time(1) utility) actually gets its information from the system clock, not the hardware clock.
The software clock is maintained by the kernel, which measures time in jiffies. This is a unit of time determined by a kernel constant. As far as I understand it, a certain number of CPU cycles will increment one jiffy. So if the OS thinks the CPU is running at 2.0 GHz, but the CPU is actually running at 1.0 GHz, then one jiffy would actually take 2 ms of wall-clock time instead of the expected 1 ms.
When running with physical hardware, we tell the CPU how fast we want it to run (slower for power-saving, faster for performance), then we assume that the hardware does what it promised because physical hardware does do that. The trick is that when the "hardware" is virtual, then the hypervisor decides how to control the virtual CPU, not the laws of physics.
A hypervisor running in userspace (like VirtualBox) will be at the mercy of the host kernel to give it the cycles it needs. If the host system is running 1000 virtual machines, you can imagine that each guest VM will only get a portion of the CPU cycles it was expecting, causing the guest system clocks to increment at a slower rate. Even if a hypervisor gets all of the resources it needs, it can also choose to throttle them as it sees fit, leaving the guest OS to run slower than it expects without understanding why.
Found this answer in Clock drift in a VirtualBox guest:
In Virtualbox Manager, changing the Paravirtualization value (System settings --> Acceleration tab) from Default to Minimal corrected the problem.
Here are the results across a few different systems (no shell built-ins though):
sysadment@freebsd10 $ (date +%s ; /bin/sleep 1 2 3 ; date +%s)
1438748148
usage: sleep seconds
1438748148
sysadment@macosx $ (date +%s ; /bin/sleep 1 2 3 ; date +%s)
1438748171
usage: sleep seconds
1438748171
sysadment@debian7 $ (date +%s ; /bin/sleep 1 2 3 ; date +%s)
1438748200
1438748206
Yeah, I didn't expect that last one either. GNU coreutils sleep, when given two or more arguments, sleeps for a time equal to the sum of the argument values. TIL
In addition to seconds, sleep(1) in GNU coreutils can also sleep for minutes, hours, or days, given the appropriate suffix:
sleep 1m # sleeps for one minute
sleep 3h # sleeps for three hours
sleep 2d # sleeps for two days
Given multiple arguments, you can mix and match these suffixes to sleep for some arbitrary amount of time; e.g. to sleep for 2 hours, 47 minutes, and 11 seconds, you simply do this:
sleep 2h 47m 11s
Hence the feature to sleep for the sum of the arguments. With non-GNU sleep, you do an awkward dance where you convert everything to seconds and sum it, or run multiple commands, e.g.
sleep 7200; sleep 2820; sleep 11 # 2h, 47m, 11s
sleep 10031 # sum of above
FWIW, you could use time instead of squeezing stuff between dates, e.g. time /bin/sleep 1 2 3.