This repository has been archived by the owner on Apr 22, 2023. It is now read-only.

Date header generation #311

Merged
5 commits merged Nov 17, 2010

Conversation

mnot

@mnot mnot commented Sep 29, 2010

As per RFC 2616, Section 14.18, Date headers are required in responses.

This patch adds them (except when already set), and caches the date string once per second to avoid the perf hit of system calls and conversion to a string.

In my testing, the performance impact is negligible (i.e., lost in the noise; runs with and without it get around 14,000 req/sec for hello world, whereas generating the date each time results in an ~11% perf drop).

I haven't documented sys.utcDate (the date cache), as it's not clear it's useful elsewhere. Am happy to do so, or move it into http.js, if you think that's better.
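The patch itself isn't reproduced here, but the once-per-second cache described above might be sketched roughly like this (illustrative names, not the actual patch code):

```javascript
// Sketch of a once-per-second Date header cache. The cached string is
// rebuilt at most once per second; between rebuilds every response
// reuses the same string.
var dateCache;

function utcDate() {
  if (!dateCache) {
    var d = new Date();
    dateCache = d.toUTCString();
    // Drop the cache at the start of the next second, so the header is
    // never more than one second stale.
    setTimeout(function () {
      dateCache = undefined;
    }, 1000 - d.getMilliseconds());
  }
  return dateCache;
}
```

Within a single second, the Date construction and the toUTCString() conversion happen once per second rather than once per request, which is where the perf win comes from.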

@zimbatm

zimbatm commented Sep 30, 2010

I don't remember if node exposes it, but libev already has the "current time", which should be precise enough for that usage. No syscall is required since the value is already in user-space, and you can remove the cache mechanism.

@mnot
Author

mnot commented Sep 30, 2010

I looked for that, but couldn't find anything. If it were exposed, this would be my preference as well (although a cache may still be necessary; in my testing, formatting the date string on every request had a significant impact).

@ry

ry commented Oct 1, 2010

In general I like the idea, but:

  • I don't like the utcDate() thing. If you don't cache, is it noticeably slower? libev knows the current time; it's making a syscall every iteration to look it up. We could expose that somehow.
  • You're using 4-space indentation instead of 2.
  • Can you add a test for the setting of the Date header?

@zimbatm

zimbatm commented Oct 1, 2010

I'm quite sure that if we can use ev_now() [1], the time-lookup overhead will disappear, but it is not exposed to JavaScript. According to the docs [2], ev_tstamp is an alias for double, which is what v8::Date::New() accepts, so it would be a perfect fit.

I tried looking for an existing module to host that new function, but to my eyes, none of them really match in terms of categorization. I will try to build a patch to provide process.now() so that we can test that out.

[1] http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#GLOBAL_FUNCTIONS
[2] http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#TIME_REPRESENTATION
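One detail worth flagging (my observation, not from the thread): ev_tstamp is seconds since the epoch as a double, while JavaScript Date values count milliseconds, so wiring ev_now() into v8::Date::New() would need a unit conversion. In JavaScript terms:

```javascript
// Hypothetical ev_now() return value: seconds since the epoch as a double.
var evNowSeconds = 1287000000.5;

// JavaScript Dates are measured in milliseconds, so scale by 1000
// before constructing the Date.
var d = new Date(evNowSeconds * 1000);
```

The types line up (double in, double out), but forgetting the factor of 1000 would produce dates in January 1970.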

@zimbatm

zimbatm commented Oct 1, 2010

I just posted a broken patch that exposes ev_now(). Even if the date is not right, I would be curious to see what it changes in terms of performance. What methodology did you use to test performance? More specifically, what client did you use to measure request throughput?

@ry

ry commented Oct 1, 2010

have you done any benchmarks?

i'm not sure ev_now() will be faster given the overhead of the c++ function call..

@mnot
Author

mnot commented Oct 1, 2010

I benchmarked on two boxes (Core i3 and Core i5, over gigabit) at home using Siege, just doing a simple Hello World.

Rough numbers (I'm not trusting Siege yet):

  • without patch: 14,300 req/sec
  • Date patch, no caching: 13,100

The interesting thing for me was that caching just the Date object didn't really help; it took it to about 13,400 IIRC. It's the .toUTCString() that seems to be expensive, which is why a cache still might be necessary even if ev_now is exposed.

I'll fix spacing and add a test.
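The relative costs above can be sanity-checked with a rough micro-benchmark along these lines (an illustrative sketch; absolute numbers are noisy and machine-dependent):

```javascript
// Rough sketch comparing the cost of constructing a Date versus
// formatting one with toUTCString(). Only the relative magnitudes
// are interesting, not the absolute timings.
var N = 200000;

var t0 = Date.now();
for (var i = 0; i < N; i++) {
  new Date();
}
var constructMs = Date.now() - t0;

var d = new Date();
t0 = Date.now();
for (var j = 0; j < N; j++) {
  d.toUTCString();
}
var formatMs = Date.now() - t0;

console.log("construct: " + constructMs + "ms, format: " + formatMs + "ms");
```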

@zimbatm

zimbatm commented Oct 2, 2010

Hi mnot,

can you re-do your benchmarks with the latest patch I posted to the list ? By caching the Date() object, I was able to improve the process.now() performances tenfold. Also, since the Date() object doesn't change, you could cache the computed utcDate on it and avoid the timeout mechanism.
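The suggestion above could be sketched like this: key the cached string on the current second and only reformat on rollover, with no timeout-based invalidation (hypothetical names, not the actual patch):

```javascript
// Sketch of caching the formatted string keyed on the current second.
// When the second hasn't changed, the previously formatted string is
// returned without calling toUTCString() again.
var lastSecond = -1;
var lastString = "";

function utcDate(d) {
  var second = Math.floor(d.getTime() / 1000);
  if (second !== lastSecond) {
    lastSecond = second;
    lastString = d.toUTCString();
  }
  return lastString;
}
```

This trades the setTimeout machinery for one division and comparison per call, which is cheap relative to the string formatting it avoids.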

@mnot
Author

mnot commented Oct 2, 2010

exports.utcDate = function () {
  var d = new Date();
  return d.toUTCString();
};

gives 13,900 req/sec.

Whereas changing it to:
  var d = process.now();
  return d.toUTCString();

gives 13,600 req/sec. Not sure why it's slower.

I ran each one a few times to warm up caches, etc.

Caching the date string (generated by new Date()) as per the patch gives 14,600 req/sec.

Returning a static string "foo" from utcDate gives 14,600 req/sec (all of these numbers should be taken with a big grain of salt, of course).

I don't doubt that having the ev date available is useful and a perf win (e.g., for logging); it's just that here, since the string has a one-second resolution, caching the string itself gives benefits that dwarf the overhead of the date syscall.

I also wonder if V8 is pulling some optimisations of its own. Given this test script:

var f = new Date();
var g = new Date();
var h = new Date();

putting it through strace gives:

open("/home/mnot/test.js", O_RDONLY) = 5
read(5, "var f = new Date();\nvar g = new "..., 4048) = 60
read(5, "", 4048) = 0
close(5) = 0
epoll_ctl(3, EPOLL_CTL_ADD, 4, {EPOLLIN, {u32=4, u64=4294967300}}) = 0
epoll_wait(3, {}, 64, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {1955, 256502909}) = 0
fcntl(1, F_SETFL, O_RDWR|O_LARGEFILE) = 0
munmap(0x7fad96595000, 163840) = 0
munmap(0x7fad96570000, 151552) = 0
exit_group(0) = ?

Note that there's only one clock_gettime call...

@zimbatm

zimbatm commented Oct 4, 2010

Okay, nice to know that V8 is smart :)

I tried reproducing the tests locally, but I didn't have two machines at hand, and the results are pretty much CPU-bound. I get around 6,700 req/s with your patch and 6,600 with my funky changes (with the new process.loopTime accessor), with 100% CPU usage the whole time.

In the end, I think those micro-optimisations will be lost in the noise of more complicated programs. The only remaining thing I would suggest is moving sys.utcDate into the http module as a utility, since it's highly specific to HTTP.

This pull request was closed.