You wouldn't with this code, since accurately measuring the time that code takes to execute is a difficult task.
To get to the question posed by your question title (you should really ask one question at a time...), the accuracy of said functions is dictated by the operating system. On Linux, the system clock granularity is 10ms, so timed process suspension via nanosleep() is only guaranteed to be accurate to 10ms, and even then it's not guaranteed to sleep for exactly the time you specify. (See below.)
On Windows, the clock granularity can be changed to accommodate power management needs (e.g. decrease the granularity to conserve battery power). See MSDN's documentation on the Sleep function.
Note that with Sleep()/nanosleep(), the OS only guarantees that the process suspension will last for at least as long as you specify. The execution of other processes can always delay resumption of your process.
Therefore, the key-up event sent by your code above will be sent at least 2.638 seconds later than the key-down event, and not a millisecond sooner. But it would be possible for the event to be sent 2.7, 2.8, or even 3 seconds later. (Or much later if a realtime process grabbed hold of the CPU and didn't relinquish control for some time.)
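(To see the effect yourself, here is a minimal C sketch for Linux, using the 2.638 s figure above purely as an example: it measures with CLOCK_MONOTONIC how long nanosleep() actually suspends the process. The overshoot you observe is the scheduling slack described above.)

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec req = { 2, 638000000 };  /* request 2.638 s, as in the example above */
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    nanosleep(&req, NULL);                   /* returns early only if a signal arrives */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("requested 2.638 s, actually slept %.6f s\n", elapsed);
    return 0;
}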
Sleep works in terms of the standard Windows thread scheduling, so it is accurate only to about 20-50 milliseconds.
That is fine for things that only need to feel right to the user, but it is absolutely inappropriate for real-time work.
Besides this, there are much better ways to simulate keyboard/mouse events. See SendInput.
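(A rough sketch of that approach; VK_SPACE and the 2638 ms delay are arbitrary choices for illustration, not anything from the original question:)

#include <windows.h>

int main(void) {
    INPUT in = { 0 };
    in.type = INPUT_KEYBOARD;
    in.ki.wVk = VK_SPACE;                 /* placeholder key */

    SendInput(1, &in, sizeof in);         /* key down */
    Sleep(2638);                          /* suspends for at least 2638 ms */

    in.ki.dwFlags = KEYEVENTF_KEYUP;
    SendInput(1, &in, sizeof in);         /* key up */
    return 0;
}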
I have the source for the sleep command from coreutils in front of me, but I do not quite understand what is actually making sleep sleep. I see some stuff about error handling, but nothing that I can pick out as the piece doing the real work of the program.
I am looking at Coreutils version 9.0 from the GNU software index (gnu/software/coreutils).
Also, I'm sure the source is pretty self-explanatory, but I'm not super fluent in C yet; it's much like translating from Spanish to English.
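(For orientation: in coreutils the part that does the actual sleeping is a nanosleep() call hidden behind a gnulib helper, xnanosleep(); most of the rest of the file is argument parsing and error handling. A stripped-down, hypothetical sketch of the same idea, not the real source:)

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s SECONDS\n", argv[0]);
        return 1;
    }

    double seconds = atof(argv[1]);        /* coreutils does much stricter parsing */
    struct timespec req = {
        .tv_sec  = (time_t) seconds,
        .tv_nsec = (long) ((seconds - (time_t) seconds) * 1e9),
    };

    /* This is the part that actually sleeps: the kernel parks the process until
       the requested time has passed (or a signal arrives).  If a signal interrupts
       the call, req holds the remaining time, so we go back to sleep for the rest. */
    while (nanosleep(&req, &req) == -1 && errno == EINTR)
        continue;

    return 0;
}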
Bash has a "loadable" sleep which supports fractional seconds and eliminates the overhead of an external command:
$ cd bash-3.2.48/examples/loadables
$ make sleep && mv sleep sleep.so
$ enable -f sleep.so sleep
Then:
$ which sleep
/usr/bin/sleep
$ builtin sleep
sleep: usage: sleep seconds[.fraction]
$ time (for f in `seq 1 10`; do builtin sleep 0.1; done)
real 0m1.000s
user 0m0.004s
sys 0m0.004s
The downside is that the loadables may not be provided with your bash binary, so you would need to compile them yourself as shown (though on Solaris it would not necessarily be as simple as above).
As of bash-4.4 (September 2016) all the loadables are now built and installed by default on platforms that support it, though they are built as separate shared-object files, and without a .so suffix. Unless your distro/OS has done something creative (sadly RHEL/CentOS 8 build bash-4.4 with loadable extensions deliberately removed), you should be able to do instead:
[ -z "$BASH_LOADABLES_PATH" ] &&
BASH_LOADABLES_PATH=$(pkg-config bash --variable=loadablesdir 2>/dev/null)
enable -f sleep sleep
(The man page implies BASH_LOADABLES_PATH is set automatically; I find this is not the case in the official distribution as of 4.4.12. If and when it is set correctly, you need only enable -f filename commandname as required.)
If that's not suitable, the next easiest thing to do is build or obtain sleep from GNU coreutils, which supports the required feature. The POSIX sleep command is minimal; older Solaris versions implemented only that, though Solaris 11 sleep does support fractional seconds.
As a last resort you could use perl (or any other scripting language you have to hand), with the caveat that initialising the interpreter may take a time comparable to the intended sleep time:
$ perl -e "select(undef,undef,undef,0.1);"
$ echo "after 100" | tclsh
The documentation for the sleep command from coreutils says:
Historical implementations of sleep have required that number be an integer, and only accepted a single argument without a suffix. However, GNU sleep accepts arbitrary floating point numbers. See Floating point.
Hence you can use sleep 0.1, sleep 1.0e-1 and similar arguments.
The "update" to question shows some misunderstanding of how modern OSs work.
The kernel is not "allowed" a time slice. The kernel is the thing that gives out time slices to user processes. The "timer" is not set to wake the sleeping process up - it is set to stop the currently running process.
In essence, the kernel attempts to fairly distribute the CPU time by stopping processes that are on CPU too long. For a simplified picture, let's say that no process is allowed to use the CPU more than 2 milliseconds. So, the kernel would set timer to 2 milliseconds, and let the process run. When the timer fires an interrupt, the kernel gets control. It saves the running process' current state (registers, instruction pointer and so on), and the control is not returned to it. Instead, another process is picked from the list of processes waiting to be given CPU, and the process that was interrupted goes to the back of the queue.
The sleeping process is simply not in the queue of things waiting for CPU. Instead, it's stored in the sleep queue. Whenever the kernel gets a timer interrupt, the sleep queue is checked, and the processes whose time has come get transferred to the "waiting for CPU" queue.
This is, of course, a gross simplification. It takes very sophisticated algorithms to ensure security and fairness, balance load, handle priorities, prevent starvation, and do it all fast with a minimal amount of memory used for kernel data.
There's a kernel data structure called the sleep queue. It's a priority queue. Whenever a process is added to the sleep queue, the expiration time of the most-soon-to-be-awakened process is calculated, and a timer is set. At that time, the expired job is taken off the queue and the process resumes execution.
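(A toy sketch in C of that priority-queue idea, nothing like a real kernel's data structures, just the concept described above: sleepers are kept ordered by wake-up time, and each timer tick moves the expired ones back to the run queue.)

#include <stdio.h>

#define MAX_SLEEPERS 64

struct sleeper { int pid; long wake_at; };   /* wake_at in ticks ("jiffies") */

static struct sleeper queue[MAX_SLEEPERS];
static int count;

/* Insert a sleeper, keeping the earliest wake-up time at index 0. */
static void sleep_enqueue(int pid, long wake_at) {
    int i = count++;
    while (i > 0 && queue[i - 1].wake_at > wake_at) {
        queue[i] = queue[i - 1];
        i--;
    }
    queue[i] = (struct sleeper) { pid, wake_at };
}

/* Called on every timer tick: move expired sleepers back to the run queue. */
static void timer_tick(long now) {
    while (count > 0 && queue[0].wake_at <= now) {
        printf("tick %ld: waking pid %d\n", now, queue[0].pid);  /* stand-in for "make runnable" */
        for (int i = 1; i < count; i++)
            queue[i - 1] = queue[i];
        count--;
    }
}

int main(void) {
    sleep_enqueue(42, 10);   /* pid 42 sleeps until tick 10 */
    sleep_enqueue(7, 5);     /* pid 7 sleeps until tick 5 */
    for (long t = 0; t <= 12; t++)
        timer_tick(t);
    return 0;
}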
(amusing trivia: in older unix implementations, there was a queue for processes for which fork() had been called, but for which the child process had not been created. It was of course called the fork queue.)
HTH!
sleep infinity, if implemented, will either sleep forever or sleep for the maximum sleep length, depending on the implementation. (See other answers and comments on this question for some of the variations.)
tail does not block
As always: For everything there is an answer which is short, easy to understand, easy to follow and completely wrong. Here tail -f /dev/null falls into this category ;)
If you look at it with strace tail -f /dev/null, you will notice that this solution is far from blocking! It's probably even worse than the sleep solution in the question, as it uses (under Linux) precious resources like the inotify system. Also, other processes which write to /dev/null make tail loop. (On my Ubuntu64 16.10 this adds several tens of syscalls per second on an already busy system.)
The question was for a blocking command
Unfortunately, there is no such thing...
Read: I do not know any way to achieve this with the shell directly.
Everything (even sleep infinity) can be interrupted by some signal. So if you want to be really sure it does not return unexpectedly, it must run in a loop, like you already did for your sleep. Please note that (on Linux) /bin/sleep apparently is capped at 24 days (have a look at strace sleep infinity), hence the best you can do is probably:
while :; do sleep 2073600; done
(Note that I believe sleep loops internally for values higher than 24 days, but that means it is not blocking, just very slowly looping. So why not move this loop to the outside?)
...but you can come quite near with an unnamed fifo
You can create something which really blocks as long as no signals are sent to the process. The following uses bash 4, 2 PIDs and 1 fifo:
bash -c 'coproc { exec >&-; read; }; eval exec "${COPROC[0]}<&-"; wait'
You can check that this really blocks with strace if you like:
strace -ff bash -c '..see above..'
How this was constructed
read blocks if there is no input data (see some other answers). However, the tty (a.k.a. stdin) usually is not a good source, as it is closed when the user logs out. Also, it might steal some input from the tty. Not nice.
To make read block, we need to wait for something like a fifo which will never return anything. In bash 4 there is a command which can provide us with exactly such a fifo: coproc. If we also wait for the blocking read (which is our coproc), we are done. Sadly, this needs to keep two PIDs and a fifo open.
Variant with a named fifo
If you do not mind using a named fifo, you can do this as follows:
mkfifo "$HOME/.pause.fifo" 2>/dev/null; read <"$HOME/.pause.fifo"
Not using a loop on the read is a bit sloppy, but you can reuse this fifo as often as you like and make the reads terminate using touch "$HOME/.pause.fifo" (if there is more than a single read waiting, all are terminated at once).
Or use the Linux pause() syscall
For infinite blocking, there is a Linux system call named pause() which does what we want: wait forever (until a signal arrives). However, there is no userspace program for this (yet).
C
Creating such a program is easy. Here is a snippet to create a very small Linux program called pause which pauses indefinitely (needs a C compiler such as gcc, and uses diet etc. to produce a small binary):
printf '#include <unistd.h>\nint main(){for(;;)pause();}' > pause.c;
diet -Os cc pause.c -o pause;
strip -s pause;
ls -al pause
python
If you do not want to compile something yourself, but you have python installed, you can use this under Linux:
python -c 'while 1: import ctypes; ctypes.CDLL(None).pause()'
(Note: Use exec python -c ... to replace the current shell; this frees one PID. The solution can be improved with some IO redirection as well, freeing unused FDs. This is up to you.)
How this works: ctypes.CDLL(None) loads the "main program" (including the C library) and runs the pause() function from it, all within a loop. Less efficient than the C version, but works.
My recommendation for you:
Stay with the looping sleep. It's easy to understand, very portable, and blocks most of the time.
The behaviour is certainly related to your hypervisor.
time(7) says:
Real time is defined as time measured from some fixed point, either from a standard point in the past (see the description of the Epoch and calendar time below), or from some point (e.g., the start) in the life of a process (elapsed time).
Process time is defined as the amount of CPU time used by a process. This is sometimes divided into user and system components. User CPU time is the time spent executing code in user mode. System CPU time is the time spent by the kernel executing in system mode on behalf of the process (e.g., executing system calls). The time(1) command can be used to determine the amount of CPU time consumed during the execution of a program.
Based on this, we can conclude that when we write:
$ time sleep 1
real 0m1.002s
user 0m0.002s
sys 0m0.000s
real is the real time, meaning the actual time (sometimes called wall-clock time) spent in the process. user is the CPU time (CPU cycles / clock frequency) spent executing code in user mode, and sys is the CPU time (CPU cycles / clock frequency) spent by the kernel executing in system mode on behalf of the process.
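(You can reproduce those three numbers from inside a program. A small Linux sketch: wrap a sleep(1) with clock_gettime() for the wall-clock part and getrusage() for the CPU part. Sleeping consumes wall-clock time but essentially no CPU time, which is why user and sys stay near zero.)

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void) {
    struct timespec t0, t1;
    struct rusage ru;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    sleep(1);                                /* spends wall-clock time, almost no CPU */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    getrusage(RUSAGE_SELF, &ru);             /* CPU time charged to this process */

    double real = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;

    printf("real %.3fs  user %.3fs  sys %.3fs\n", real, user, sys);
    return 0;
}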
To paraphrase your problem:
Why doesn't the real time reported by time(1) match my watch?
When you run an OS on bare metal, you'll usually have a battery-powered crystal oscillator which runs at a constant frequency. This hardware clock will keep track of the time since the epoch. The number of oscillations per second can be tuned to correct for drift (see hwclock(8)).
time(7) also says:
The accuracy of various system calls that set timeouts, (e.g., select(2), sigtimedwait(2)) and measure CPU time (e.g., getrusage(2)) is limited by the resolution of the software clock, a clock maintained by the kernel which measures time in jiffies. The size of a jiffy is determined by the value of the kernel constant HZ.
The hardware clock is used to initialize the system clock (which would otherwise only know the time since boot). I suspect your hypervisor (VirtualBox) uses some hwclock to initialize the time. After that, the software clock takes over.
rtc(4) says:
[hardware clocks] should not be confused with the system clock, which is a software clock maintained by the kernel and used to implement gettimeofday(2) and time(2), as well as setting timestamps on files, and so on.
What we just learned here is that time(2) (which is the library call used by the time(1) utility) actually gets its info from the system clock, not the hardware clock.
The software clock is maintained by the kernel, which measures time in jiffies. A jiffy is a unit of time determined by a kernel constant. As far as I understand it, a certain number of CPU cycles will increment one jiffy. So if the OS thinks the CPU is running at 2.0 GHz, but the CPU is actually running at 1.0 GHz, then one jiffy would actually take 2ms of wall-clock time instead of the expected 1ms.
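(As an aside, you can see the tick rate that user space is shown with sysconf(_SC_CLK_TCK); this is USER_HZ, which is related to, but not necessarily the same as, the kernel's internal HZ constant:)

#include <stdio.h>
#include <unistd.h>

int main(void) {
    long ticks = sysconf(_SC_CLK_TCK);   /* clock ticks per second visible to user space */
    printf("ticks per second: %ld (one tick = %.1f ms)\n", ticks, 1000.0 / ticks);
    return 0;
}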
When running with physical hardware, we tell the CPU how fast we want it to run (slower for power-saving, faster for performance), then we assume that the hardware does what it promised because physical hardware does do that. The trick is that when the "hardware" is virtual, then the hypervisor decides how to control the virtual CPU, not the laws of physics.
A hypervisor running in userspace (like VirtualBox) will be at the mercy of the host kernel to give it the cycles it needs. If the host system is running 1000 virtual machines, you can imagine that each guest VM will only get a portion of the CPU cycles it was expecting, causing the guest system clocks to increment at a slower rate. Even if a hypervisor gets all of the resources it needs, it can also choose to throttle those resources as it sees fit, leaving the guest OS to run slower than it expects without understanding why.
Found this answer in Clock drift in a VirtualBox guest:
In Virtualbox Manager, changing the Paravirtualization value (System settings --> Acceleration tab) from Default to Minimal corrected the problem.