Making time go faster

I got the following email the other day from a reader:

I came across your post on PC time and thought you might have a quick answer. Thanks in advance.

I have a program, a reader, that reads a log and sends a message to a second program, a counter, that counts the number of events it receives in a five-minute period.

Each entry in the log has a time stamp, and the entries are sequential. The log file reader reads the log, waits until the clock reaches the time in the time stamp (hour, minute, and second), and then sends that event to the listener at that time. So processing a log for one day takes one day.

I was wondering if there was some way that I could speed up the clock so I don't have to change the logic of either of the programs, but have the task completed in, say, 4 hours (by speeding up the clock by 6 times).

That's a fascinating idea, actually - is there a way of making time go faster?

Whenever someone asks a question like this one, it's time to fall back on the tricks of the master. Whenever Raymond gets a question like this, he turns around and asks "What if <x> could happen?" (or "What if two applications tried to do <y>?").

So let's ask the question: "What if there were a way of speeding up the system clock so you don't have to wait so long?"

What would that do to system timers?  If the entire system time is running faster, then they would also run faster. Would there be a consequence if something that's supposed to happen every 5 minutes all of a sudden started happening every 5 seconds?

How about timeouts?  In other words, if I read some data from the disk and wait (with a timeout) for the disk read to complete, will that timeout run faster?  What happens when the read timeout is 3 seconds and the disk normally takes 2 seconds to complete an I/O?  With the clock running 6 times faster, that 3-second timeout expires after only half a real second, long before the disk is done.

What about the monitor refresh frequency?  If your app is synchronized to the monitor refresh interval, and the application all of a sudden starts running twice as fast, what would the results be?

If you're trying to make the system run faster than real time, it seems like all of these would also have to speed up (otherwise you couldn't make the log playback run faster, since they all run off the same timer).

This is the crux of the problem - if you want your sleeps to go faster, then all the system events also need to be faster.  Unfortunately, there's all this physical hardware attached to the computer, and that physical hardware takes time to complete operations.  If you also have timeouts associated with those hardware operations, then you may introduce real problems.

So the simple answer to the question is: "Not usually."  It might be possible to do something like this in a VM environment, and there are some utilities on the web that claim to be able to do it, but I wouldn't recommend them.

Why can't you change the program that plays back the log to simply not wait as long?
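
For what it's worth, that change is usually tiny. Here's a minimal sketch of a playback loop that runs at an arbitrary speedup factor - the LogEntry type and the sendEvent function are hypothetical stand-ins for whatever the reader's programs actually do. Instead of waiting for the wall clock to reach each time stamp, it sleeps for the gap between consecutive time stamps, divided by the speedup:

```cpp
#include <chrono>
#include <thread>
#include <vector>

// Hypothetical stand-ins for the reader's actual log machinery.
struct LogEntry {
    std::chrono::seconds timestamp;  // hh:mm:ss offset within the day
    // ... event payload ...
};

void sendEvent(const LogEntry& entry);  // forwards the event to the counter

// Replay the log at 'speedup' times real time by scaling the gap
// between consecutive entries instead of waiting for wall-clock time.
void replayLog(const std::vector<LogEntry>& entries, double speedup)
{
    for (size_t i = 0; i < entries.size(); ++i) {
        if (i > 0) {
            auto gap = entries[i].timestamp - entries[i - 1].timestamp;
            std::this_thread::sleep_for(gap / speedup);  // 6.0 => 4-hour day
        }
        sendEvent(entries[i]);
    }
}
```

Of course, if the counter program measures its five-minute windows with its own clock, its logic needs the same scaling - which is presumably why the reader hoped to cheat with the system clock instead.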

Comments

  • Anonymous
    July 26, 2005
    This made me think of a follow-up. At one point I did some x86 assembly coding in DOS. In the process I would hook into the system timer to make events happen on a schedule. Often I would make a mistake and make the clock run much slower or faster than it was supposed to. Would something like this affect a modern Windows OS, or does it not use the RTC for timing purposes anymore?
  • Anonymous
    July 26, 2005
    Actually you can make time faster - check the timeXxx APIs, which change the clock rate for Windows. But it doesn't affect timers and timeouts.

  • Anonymous
    July 26, 2005
    He could always attempt to accelerate himself close to the speed of light. IIRC, Einstein's famous theory suggests that this results in time slowing down for the traveller (which, relatively speaking, means external time has sped up).

    Seriously, though, I don't think timeBeginPeriod/timeEndPeriod actually speed up or slow down time measurement. My interpretation of the docs is that they simply change the resolution/granularity of the timer updates.

    That said, I suppose it would be possible to "speed up" the clock by simply polling the system clock and advancing system time by 1 second every x milliseconds (where x is a number < 1000). Very bad idea, I think, but surely it is actually possible... That would, of course, affect only programs which query the actual time rather than a tick count.
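
    A minimal sketch of that distinction, for the record - timeBeginPeriod changes how often the timer interrupt fires, not how fast time advances, so a Sleep still takes the same real time (this assumes a Win32 build linked against winmm.lib):

    ```cpp
    #include <windows.h>
    #include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod / timeGetTime
    #pragma comment(lib, "winmm.lib")

    int main()
    {
        // Request 1ms timer resolution: sleeps and multimedia timers get
        // finer-grained, but the clock does not run any faster.
        timeBeginPeriod(1);

        DWORD start = timeGetTime();
        Sleep(1000);                           // still about one real second
        DWORD elapsed = timeGetTime() - start; // ~1000ms either way

        timeEndPeriod(1);   // always pair timeEndPeriod with timeBeginPeriod
        return elapsed > 0 ? 0 : 1;
    }
    ```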
  • Anonymous
    July 26, 2005
    Sorry for replying to my own comment, but it would seem that this can be rather easily accomplished using SetSystemTimeAdjustment (if the user is running an NT-series OS and has the appropriate privileges).
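
    For the curious, a sketch of how that might look. Note this genuinely changes what the wall clock reports machine-wide, requires SeSystemtimePrivilege, and will confuse everything on the box that trusts the time of day, so it's an illustration rather than a recommendation:

    ```cpp
    #include <windows.h>

    // Make the time-of-day clock advance roughly 6x too fast by applying
    // six times the normal increment at every clock interrupt.
    // Requires SeSystemtimePrivilege to be enabled for this process.
    bool SpeedUpClock()
    {
        DWORD adjustment, increment;
        BOOL  disabled;
        if (!GetSystemTimeAdjustment(&adjustment, &increment, &disabled))
            return false;

        // 'increment' is how far the clock normally advances per interrupt
        // (in 100ns units); asking for 6x that makes wall time run 6x fast.
        return SetSystemTimeAdjustment(increment * 6, FALSE) != FALSE;
    }

    void RestoreClock()
    {
        // Passing TRUE re-enables the default, system-managed adjustment.
        SetSystemTimeAdjustment(0, TRUE);
    }
    ```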
  • Anonymous
    July 26, 2005
    That's what SetSystemTimeAdjustment does. It specifies the way the clock interrupt modifies the time-of-day clock.
  • Anonymous
    July 26, 2005
    Pretty sure you can force this behaviour per app with a shim. I think the appcompat testing toolkit had a shim for testing timer rollover by overriding the return value of timeGetTime or somesuch for the shimmed app. So I would think it's possible, depending on how your app gets the time...
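
    Something in that spirit, sketched without the shim infrastructure: if the app can be pointed at a wrapper instead of calling timeGetTime directly, that wrapper can compress time for the one app while the rest of the system stays honest (ScaledTimeGetTime is a made-up name):

    ```cpp
    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    // Hypothetical replacement an app could call (or a shim could redirect
    // timeGetTime to): reports time passing 6x faster than reality for
    // this process only. Wraps around sooner than the real timeGetTime.
    DWORD ScaledTimeGetTime()
    {
        static const DWORD origin = timeGetTime();  // baseline on first call
        DWORD realElapsed = timeGetTime() - origin;
        return origin + realElapsed * 6;            // 6x accelerated clock
    }
    ```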
  • Anonymous
    July 26, 2005
    I'm underwhelmed by all of this. All of these comments seem to be trying to solve the symptom instead of the problem. Problem - this person would like a quick way of gathering some kind of audit frequency data from an existing log. The solution is not to use software designed to monitor real-time for log flooding. The solution is to use some other software to parse the log and extract that data. Seems like the kind of thing one could solve with a fairly simple script.

    If all you have is a hammer . . .
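
    Something along those lines, as a sketch - this buckets events into five-minute windows in a single pass, assuming one entry per line starting with an HH:MM:SS time stamp (a guess at the log format):

    ```cpp
    #include <cstdio>
    #include <fstream>
    #include <map>
    #include <string>

    // Count log events per five-minute bucket. No replaying, no waiting:
    // a whole day's log is processed as fast as the file can be read.
    int main(int argc, char* argv[])
    {
        if (argc < 2) return 1;

        std::ifstream log(argv[1]);
        std::map<int, int> counts;  // bucket index -> event count
        std::string line;
        int h, m, s;

        while (std::getline(log, line))
            if (std::sscanf(line.c_str(), "%d:%d:%d", &h, &m, &s) == 3)
                ++counts[(h * 60 + m) / 5];  // five-minute buckets

        for (const auto& bucket : counts)
            std::printf("%02d:%02d  %d events\n",
                        (bucket.first * 5) / 60, (bucket.first * 5) % 60,
                        bucket.second);
        return 0;
    }
    ```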
  • Anonymous
    July 27, 2005
    I seem to recall the mouse driver on the Amstrad PC1512/1640 did that -- it sped up the periodic interrupt from 18.2 times a second to some multiple of that so that it could poll the mouse position registers more frequently, and then only passed on every n-th tick to the previous interrupt handler.

    I also seem to recall that this occasionally caused havoc with the time-of-day clock when other software tried to play with the same timer or assumed it was still running at the usual 55ms interval...