The behaviour is certainly related to your hypervisor.
time(7) says:
Real time is defined as time measured from some fixed point, either
from a standard point in the past (see the description of the Epoch
and calendar time below), or from some point (e.g., the start) in the
life of a process (elapsed time).
Process time is defined as the amount of CPU time used by a process.
This is sometimes divided into user and system components. User CPU
time is the time spent executing code in user mode. System CPU time
is the time spent by the kernel executing in system mode on behalf of
the process (e.g., executing system calls). The time(1) command can
be used to determine the amount of CPU time consumed during the
execution of a program.
Based on this, we can conclude that when we write:
$ time sleep 1
real 0m1.002s
user 0m0.002s
sys 0m0.000s
real is the real time, meaning the actual time (sometimes called wall-clock time) spent in the process; user is the CPU time (roughly, CPU cycles divided by the clock frequency) spent executing code in user mode; and sys is the CPU time spent by the kernel executing in system mode on behalf of the process.
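If you want to see the three numbers pull apart, here are a few illustrative commands (the exact figures will of course vary from machine to machine):
# CPU-bound in user space: real is roughly equal to user, sys stays near zero
$ time seq 1 2000000 > /dev/null
# lots of tiny system calls: sys becomes a large share of the CPU time
$ time dd if=/dev/zero of=/dev/null bs=1 count=500000
# mostly waiting: real is large while user and sys stay near zero
$ time sleep 1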
To paraphrase your problem:
Why doesn't the real time reported by time(1) match my watch?
When you run an OS on bare metal, you'll usually have a battery-powered crystal oscillator which runs at a constant frequency. This hardware clock will keep track of the time since the Epoch. The number of oscillations per second can be tuned to correct for drift (see hwclock(8)).
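If you're curious, you can poke at that hardware clock directly (this assumes your system exposes an RTC device and has util-linux's hwclock installed, which most distributions do):
# read the battery-backed hardware clock (usually needs root)
$ sudo hwclock --show
# the drift-correction factor that hwclock maintains lives here
$ cat /etc/adjtime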
time(7) also says:
The accuracy of various system calls that set timeouts, (e.g., select(2), sigtimedwait(2)) and measure CPU time
(e.g., getrusage(2)) is limited by the resolution of the software clock, a clock maintained by the kernel which
measures time in jiffies. The size of a jiffy is determined by the value of the kernel constant HZ.
The hardware clock is used to initialize the system clock (which would otherwise only know the time since boot). I suspect your hypervisor (VirtualBox) uses some hardware clock to initialize the time. After that, the software clock takes over.
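You can check which clock source the guest kernel is actually counting on, and compare the system clock against the RTC in one shot (timedatectl assumes a systemd-based distribution):
# the kernel's current and available clock sources; in a VM this is often tsc or a paravirtual clock
$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
# shows the system clock ("Local time"/"Universal time") next to the "RTC time"
$ timedatectl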
rtc(4) says:
[hardware clocks] should not be confused with the system clock, which is a
software clock maintained by the kernel and used to implement
gettimeofday(2) and time(2), as well as setting timestamps on files,
and so on.
What we just learned here is that time(2) (the kind of call that utilities like time(1) rely on) actually gets its information from the system clock, not the hardware clock.
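A quick way to convince yourself of that (nothing here touches the RTC):
# both of these read the kernel's software clock
$ date +%s          # seconds since the Epoch
$ cat /proc/uptime  # seconds since boot, first field
# the RTC is read through a separate interface, e.g. hwclock --show as above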
The software clock is maintained by the kernel, which measures time in jiffies. This is a unit of time determined by a kernel constant. As far as I understand it, a certain number of CPU cycles will increment one jiffy. So if the OS thinks the CPU is running at 2.0 GHz, but the CPU is actually running at 1.0 GHz, then one jiffy would actually take 2 ms of wall-clock time instead of the expected 1 ms.
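You can see the relevant tick rates on your own system (the second command assumes your distribution ships the kernel config under /boot, as Debian and Ubuntu do):
# USER_HZ, the jiffy rate exposed to user space (typically 100)
$ getconf CLK_TCK
# the kernel's internal tick rate HZ
$ grep '^CONFIG_HZ=' /boot/config-$(uname -r)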
When running on physical hardware, we tell the CPU how fast we want it to run (slower for power saving, faster for performance) and then assume the hardware does what it promised, because physical hardware actually does. The trick is that when the "hardware" is virtual, the hypervisor decides how the virtual CPU behaves, not the laws of physics.
A hypervisor running in userspace (like VirtualBox) is at the mercy of the host kernel to give it the cycles it needs. If the host system is running 1000 virtual machines, you can imagine that each guest VM will only get a portion of the CPU cycles it was expecting, causing the guest system clocks to increment at a slower rate. Even if a hypervisor gets all of the resources it needs, it can also choose to throttle resources as it sees fit, leaving the guest OS to run slower than it expects without understanding why.
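A crude way to check whether your guest's clock is losing time (the second command assumes ntpdate is installed; chrony or systemd-timesyncd can report the same offset):
# this should take about 60 s on your wristwatch; if the guest is being
# starved or throttled, the printed timestamps may disagree with your watch
$ date; sleep 60; date
# or ask an NTP server how far off the guest clock is (query only, changes nothing)
$ ntpdate -q pool.ntp.org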
Answer from Stewart on unix.stackexchange.com