I recently stumbled upon Native Messaging, which is basically the way for a browser extension (Chrome or Firefox) to communicate with a native application. A friend was implementing this feature in his Rust app and I decided to help out, which was also enough motivation for me to finally start learning Rust.
Native Messaging - Illustrated
Courtesy of MDN
Libraries used for Native Messaging
libnativemsg, made by yours truly - shameless plug
No but seriously, I didn't find another library for native messaging in C on GitHub, though there were two for Rust.
The story (also performance is love)
I already had a pretty decent low-level base built up, and after reading the very well documented MDN and Google docs - and with major help from this crate's code - I had these functions-
use std::io::{self, Read, Write};
use byteorder::{NativeEndian, ReadBytesExt, WriteBytesExt};

pub fn read_input<R: Read>(mut input: R) -> io::Result<serde_json::Value> {
    let length = input.read_u32::<NativeEndian>().unwrap();
    let mut buffer = vec![0; length as usize];
    input.read_exact(&mut buffer)?;
    let json_val: serde_json::Value = serde_json::from_slice(&buffer).unwrap();
    Ok(json_val)
}

pub fn write_output<W: Write>(mut output: W, value: &serde_json::Value) -> io::Result<()> {
    let msg = serde_json::to_string(value)?;
    let len = msg.len();
    // Chrome won't accept a message larger than 1MB
    if len > 1024 * 1024 {
        panic!("Message was too large, length: {}", len)
    }
    output.write_u32::<NativeEndian>(len as u32)?;
    output.write_all(msg.as_bytes())?;
    output.flush()?;
    Ok(())
}
These handled reading and writing the messages respectively. And it all worked like a charm!
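For context, wiring the two together into a one-shot host looks roughly like this - a minimal sketch, not the exact code I shipped, and the {"msg": "pong"} reply is just what my test extension happens to read later on:

use serde_json::json;

fn main() -> std::io::Result<()> {
    // One-shot (connectionless) host: read a single message from stdin,
    // answer it on stdout, then exit.
    let incoming = read_input(std::io::stdin())?;
    // stderr is free for logging; stdout is reserved for the protocol itself.
    eprintln!("got: {}", incoming);
    write_output(std::io::stdout(), &json!({ "msg": "pong" }))?;
    Ok(())
}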
Now I'm really obsessed with performance, probably a bit more than I should be, though I usually don't prefer performance over resource efficiency and safety. So I had to dig further.
I decided to write this in C as well, make the Rust code just a bit better for performance, and keep both codebases as fair as possible. They both had to do exactly the same thing. Nothing more, nothing less.
So I removed serde_json completely and just worked with strings instead - the user can do the parsing themselves (there's a short sketch of that after these snippets). So instead of-
let json_val: serde_json::Value = serde_json::from_slice(&buffer).unwrap();
I used-
let val: String = match String::from_utf8(buffer) {
    Err(why) => panic!("{}", why),
    Ok(val) => val,
};
Same for the output: I just removed let msg = serde_json::to_string(value)?; completely and changed the parameter to &str instead.
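If a consumer still wants structured JSON, the parse simply moves to their side - a rough sketch, assuming read_input now hands back the raw String:

let raw = read_input(std::io::stdin())?;
// Only pay for the JSON parse if you actually need the structure:
let parsed: serde_json::Value = serde_json::from_str(&raw)
    .expect("extension sent invalid JSON");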
Not much difference syntactically, but definitely a great boost to performance! The C code did the same thing; here's what that looked like-
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

char* read_input(FILE* stream) {
    uint32_t length;
    size_t count;
    int err;

    count = fread(&length, sizeof(uint32_t), 1, stream);
    if (count != 1) {
        if (feof(stream)) {
            fprintf(stderr, "Unexpectedly encountered EOF while reading file\n");
        } else if ((err = ferror(stream))) {
            fprintf(stderr, "An error occurred while reading file, err code: %d\n", err);
            clearerr(stream);
        }
        return NULL;
    }

    char* value = malloc((length + 1) * sizeof(char));
    if (value == NULL) {
        fprintf(stderr, "An error occurred while allocating memory for value\n");
        return NULL;
    }

    count = fread(value, sizeof(char), length, stream);
    if (count != length) {
        if (feof(stream)) {
            fprintf(stderr, "Unexpectedly encountered EOF while reading file\n");
        } else if ((err = ferror(stream))) {
            fprintf(stderr, "An error occurred while reading file, err code: %d\n", err);
            clearerr(stream);
        }
        free(value);
        return NULL;
    }

    value[length] = '\0';
    return value;
}

size_t write_output(const char* const value, uint32_t length, FILE* stream) {
    size_t count;
    int err;

    if (length > (1024 * 1024)) {
        fprintf(stderr, "Message too large");
        return -1;
    }

    count = fwrite(&length, sizeof(uint32_t), 1, stream);
    if (count != 1) {
        if (feof(stream)) {
            fprintf(stderr, "Unexpectedly encountered EOF while writing file\n");
        } else if ((err = ferror(stream))) {
            fprintf(stderr, "An error occurred while writing file, err code: %d\n", err);
            clearerr(stream);
        }
        return -1;
    }

    count = fwrite(value, sizeof(char), length, stream);
    if (count != length) {
        if (feof(stream)) {
            fprintf(stderr, "Unexpectedly encountered EOF while writing file\n");
        } else if ((err = ferror(stream))) {
            fprintf(stderr, "An error occurred while writing file, err code: %d\n", err);
            clearerr(stream);
        }
        return -1;
    }

    fflush(stream);
    return length + 4;
}
Yeah I know, a lot of error handling. But! It's time to benchmark! So how do they fare?
Hold up! Not quite yet! Let's check the profile I used for optimization on the Rust side-
[profile.release]
opt-level = 3
debug = false
debug-assertions = false
overflow-checks = false
lto = true
panic = 'abort'
incremental = false
rpath = false
Looks quite juicy for the benchmark right? I'd say so!
What about C? Well, I didn't have a gcc + Native Messaging setup handy, so I had to use clang... on Windows, long story. This was kind of a disadvantage. All I could do was -Ofast. That's it.
Yes, I know. Not very fair on the optimization side. But I did some testing on the unoptimized versions of both, and C was absolutely wiping the floor, so I figured - "whatever"
Also, the benchmark was all done on the JavaScript side, and the following is the connectionless version: each time a message is sent, the extension invokes the executable and waits for it to read the message and send a response. This is what it looked like-
var start;

chrome.browserAction.onClicked.addListener(() => {
    console.log('Sending: ping')
    start = performance.now();
    chrome.runtime.sendNativeMessage("pingpong", {text: "ping"}, onResponse);
});

function onResponse(res) {
    let end = performance.now();
    console.log(`Received: ${res.msg}, Took: ${end - start} ms`);
}
Alright, benchmark time! Here's what Rust looked like-
And here's C-
Close, very close! But C still seems to win by just a bit. Impressive results nevertheless - how about some more optimization?
That question led to many, many code revisions - a lot of Rust programmers helped me out enormously with optimizing the code. After all those revisions, this is what the Rust code looked like-
use std::io::{self, Read, Write};

pub fn read_input() -> io::Result<Vec<u8>> {
    let mut instream = io::stdin();
    let mut length = [0; 4];
    instream.read(&mut length)?;
    let mut buffer = vec![0; u32::from_ne_bytes(length) as usize];
    instream.read_exact(&mut buffer)?;
    Ok(buffer)
}

pub fn write_output(msg: &str) -> io::Result<()> {
    let mut outstream = io::stdout();
    let len = msg.len();
    if len > 1024 * 1024 {
        panic!("Message was too large, length: {}", len)
    }
    outstream.write(&len.to_ne_bytes())?;
    outstream.write_all(msg.as_bytes())?;
    outstream.flush()?;
    Ok(())
}
Got rid of all dependencies and, instead of converting to a String, returned the raw bytes. Also hard-coded stdin and stdout, as those are always the streams used for Native Messaging anyway.
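One more tweak worth noting, though it's not part of the benchmarked code and just an untested sketch: locking the handle once, so each read doesn't have to re-acquire the stdin lock.

use std::io::{self, Read};

pub fn read_input_locked() -> io::Result<Vec<u8>> {
    let stdin = io::stdin();
    // Take the lock once up front; a bare io::stdin().read() locks on every call.
    let mut handle = stdin.lock();
    let mut length = [0; 4];
    handle.read_exact(&mut length)?;
    let mut buffer = vec![0; u32::from_ne_bytes(length) as usize];
    handle.read_exact(&mut buffer)?;
    Ok(buffer)
}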
The C code also changed accordingly, though it went through far fewer revisions-
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

uint8_t* read_input() {
    uint32_t length;
    size_t count;
    int err;

    count = fread(&length, sizeof(uint32_t), 1, stdin);
    if (count != 1) {
        if (feof(stdin)) {
            fprintf(stderr, "Unexpectedly encountered EOF while reading file\n");
        } else if ((err = ferror(stdin))) {
            fprintf(stderr, "An error occurred while reading file, err code: %d\n", err);
            clearerr(stdin);
        }
        return NULL;
    }

    uint8_t* value = malloc((length + 1) * sizeof(*value));
    if (value == NULL) {
        fprintf(stderr, "An error occurred while allocating memory for value\n");
        return NULL;
    }

    count = fread(value, sizeof(*value), length, stdin);
    if (count != length) {
        if (feof(stdin)) {
            fprintf(stderr, "Unexpectedly encountered EOF while reading file\n");
        } else if ((err = ferror(stdin))) {
            fprintf(stderr, "An error occurred while reading file, err code: %d\n", err);
            clearerr(stdin);
        }
        free(value);
        return NULL;
    }

    return value;
}

size_t write_output(const uint8_t* const value, uint32_t length) {
    size_t count;
    int err;

    if (length > (1024 * 1024)) {
        fprintf(stderr, "Message too large");
        return 0;
    }

    count = fwrite(&length, sizeof(uint32_t), 1, stdout);
    if (count != 1) {
        if (feof(stdout)) {
            fprintf(stderr, "Unexpectedly encountered EOF while writing file\n");
        } else if ((err = ferror(stdout))) {
            fprintf(stderr, "An error occurred while writing file, err code: %d\n", err);
            clearerr(stdout);
        }
        return 0;
    }

    count = fwrite(value, sizeof(char), length, stdout);
    if (count != length) {
        if (feof(stdout)) {
            fprintf(stderr, "Unexpectedly encountered EOF while writing file\n");
        } else if ((err = ferror(stdout))) {
            fprintf(stderr, "An error occurred while writing file, err code: %d\n", err);
            clearerr(stdout);
        }
        return 0;
    }

    fflush(stdout);
    return length + 4;
}
And thus, it was benchmark time again! The final iteration!
Here's Rust-
And here's C-
Well, would you look at that... not much difference from last time. I sure as hell was disappointed.
Just as I was about to call it a day on this benchmark, someone suggested I use connectionful messaging instead, because Rust executable invocations take a long time - far longer than C's - due to panic handling and a whole lot of other setup. I decided, "sure, I could try that!"
So I switched my JS code to this-
var start;
var port = chrome.runtime.connectNative('pingpong');

port.onMessage.addListener(function(res) {
    let end = performance.now();
    console.log(`Received: ${res.msg}, took: ${(end - start) * 1000} µs`);
});

port.onDisconnect.addListener(function() {
    console.log("Disconnected");
});

chrome.browserAction.onClicked.addListener(() => {
    console.log("Sending: ping")
    start = performance.now();
    port.postMessage({ text: "ping" });
});
I also changed my native app to use an infinite loop, so it keeps listening for input instead of exiting after one message.
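The Rust side of that loop boils down to roughly this - a sketch rather than the exact code, assuming the read_input / write_output from above and a hard-coded "pong" reply to match what the extension reads:

fn main() {
    loop {
        match read_input() {
            // With read_input as written above, a closed stdin comes back
            // as an empty buffer - treat that as the browser disconnecting.
            Ok(buf) if buf.is_empty() => break,
            Ok(_msg) => {
                // Answer every message with the same pong the extension expects.
                if write_output(r#"{"msg": "pong"}"#).is_err() {
                    break;
                }
            }
            Err(_) => break,
        }
    }
}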
This actually brought the round-trip times down into the microsecond range! So I had to switch the timer to microseconds instead - how astonishing!
And thus, here be the final results-
Rust-
C-
Incredible! Now that's a feat. Averaging these two results, with one anomaly discarded from each (the 1k+ microsecond reading for C and the 600+ microsecond reading for Rust), we can see that C beats Rust by just 20-30 microseconds!
And thus, C is the winner! Though Rust put up a very admirable fight.
If you'd like to test out the code yourself - and I insist you do, as my benchmarking really isn't very sophisticated - you can find the code in my gists-
This has been very fun! And a super nice start to my Rust journey! I think this was a pretty practical demonstration of stdin and stdout IO speeds for Rust vs C. I hope to continue these benchmarks in the future, focusing on other practical topics.
Once again, if you're looking forward to implementing Native Messaging in your C application, please check out libnativemsg!
Top comments (1)
Nice article, thanks for putting this out there! The bare-bones example helped me do a quick sanity check whilst developing my own native messaging host in Rust.
However, there is currently a problem with the code in write_output as it exists on GitHub at the time of posting this comment:
let len = msg.len();
if len > 1024 * 1024 {
panic!("Message was too large, length: {}", len)
}
outstream.write(&len.to_ne_bytes())?;
This breaks the native messaging protocol when compiling on a 64-bit platform, because len is of type usize, which will be 8 bytes long. The protocol expects only a 4-byte header denoting the message size. This is fixed easily enough with:
let len = msg.len() as u32;
Thanks again!