IDE tools released, my cgi.d gets new features

Posted 2018-12-31

Core D Development Statistics

In the community

Community announcements

A new DMD release is scheduled for the coming days, including turning on -dip25 by default. That switch requires the return annotation on ref parameters when the function may return those same references.
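As a minimal illustration of what -dip25 starts demanding (my own toy example, not taken from the release notes):

ref int identity(return ref int x) {
	// without the return annotation on x, returning it by ref is rejected under -dip25
	return x;
}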

I plan to update my code to support this in the coming week or two, and I will write about it when I do (tbh I haven't really used this stuff at all yet).

See more at the announce forum.

What Adam is working on

This Week in D improvements

I updated the RSS feed generator, making it live again. I also have decided that these current URLs will be its permanent home, so you can go ahead and bookmark, add feeds, whatever.

Likely coming next week

I have decided that adrdox will need to document version identifiers, since those form part of the compile-time API of the module... sort of. I will code that and write about it probably next week. D's version keyword is kinda a pain to use, but I think we can tame it, to some extent. cgi.d uses various weird combinations of versions, and making that sane will be a nice little benefit to me too.

cgi.d's new stuff in the works!

The Impetus

I ported a simple board game to the web last week. It was about 150 lines of code, about 100 of them in D and the other 50 in CSS, Javascript, and HTML. Very simple stuff - being a board game, the point was just for the computer to set up the board, and then we'd play it with two computers.

But, when we actually tried playing it, looking between the two screens (the public "questions" view was on an iPad on the table, and the private "answers" view was on my laptop) was a pain. See, the two views shared a random number seed, so the program could statelessly show the same board, and in theory, we could easily look at the player's view to see their current position, but in practice, we found ourselves craving live updates between the two screens too.

Well, of course, this could have been trivial - it is a simple board game and I was the only user (even if across two devices), so all kinds of simple options were available. But I have also been wanting to play with a new idea for a while - add-on servers to the CGI core - and this seemed like a good opportunity (especially since I have the week off from my day job!) to go a little crazy and ridiculously overengineer this solution.

Well, overengineer for what it does. But the solution would also shore up one of cgi.d's deficiencies for certain modern web applications, and enable me to reuse the foundation for other purposes.

The Background

cgi.d is my base library for writing D web servers. It started off as a minimal helper module for the CGI protocol. CGI is a language-neutral standard interface between production web servers and applications. Over the 10 years it has been in active use, cgi.d has picked up other modes too: the FastCGI protocol, which allows reusing an application instance over several requests; the SCGI protocol, which tries to simplify FastCGI; and speaking HTTP directly via an embedded server, which itself has two major handling modes.

All five of these protocol modes are abstracted away from user code behind the same interface, the Cgi class, which is passed to a user-defined request handler function. The other setup details are hidden behind a GenericMain mixin (which now just calls a cgiMainImpl function - until this week it called *another* mixin, but there was no reason for that and it complicated imports! I removed it this week, which is a small breaking change, but a trivial one - just import your own Phobos modules.). Since the cgi.d library traditionally provides its own main function, it does all the setup for each protocol for you, and it can also provide a command line interface, parsing args into a mock HTTP request for your application. This will be useful later :)
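For anyone who hasn't used it, the whole thing in its simplest form looks about like this (a minimal sketch; see cgi.d's docs for the exact command line syntax of the mock-request feature, which I won't guess at here):

import arsd.cgi;

void hello(Cgi cgi) {
	// the same handler runs regardless of which protocol mode you compiled in
	cgi.write("Hello, world!");
}
mixin GenericMain!hello;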

The five protocol modes do differ in a few ways that matter for some advanced usage though, most notably in the process model.

Mode              Concurrency Style           Managed Externally
CGI               One process per request     Yes, on each request
FastCGI           Dynamic process pool        Yes, by master httpd
SCGI              Thread pool                 No*
HTTP/Threads      Thread pool                 No*
HTTP/Processes    Process pool                By cgi.d's master process

* Note that the thread pool server can be restarted externally if the server crashes.

The differences here, while not affecting the cgi module's normal interface, are profound if trying to store data in between requests.

With regular CGI (cgi.d's default compilation mode), your program gets a fresh process for each HTTP request. This means any attempt to store cross-request data in variables in your program will fail. On the other hand, with this model, you get very high crash resiliency, simple deployment on an existing web server setup, and generally simple-to-understand code.

With FastCGI (compile your program with -version=fastcgi, though this requires an fcgi C library to be available - the only cgi.d mode with any external dependencies!), the process management is typically outsourced. With Apache, for example, it will spawn new processes and terminate old ones based on its configuration directives. On other production web servers, you would typically use the spawn-fcgi command to manage them, with its own configuration setup. These give good crash resiliency (the parent program will restart a crashed worker for you), but little predictability to the cgi.d user, so you cannot rely on your global variables surviving.

With cgi.d's embedded HTTP server running in process mode (the default with -version=embedded_httpd on Linux; not guaranteed to be available on other OSes, though you can try -version=embedded_httpd_processes to force it), you get some predictability and even some control - a static constructor can change the (currently undocumented) processPoolSize in your own code - and decent crash resiliency, since cgi.d itself will restart crashed workers. But requests are scattered randomly across the worker processes, each of which has its own memory space, which makes it difficult to share data between requests.
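Since processPoolSize is undocumented right now, treat this as a sketch of the idea rather than a stable API (the exact declaration of the variable is an assumption on my part):

import arsd.cgi;

static this() {
	// runs before the worker pool is spawned, so the embedded_httpd_processes
	// master picks up the new size
	processPoolSize = 8;
}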

With cgi.d's embedded HTTP server in threaded mode (-version=embedded_httpd on non-Linux OSes, or -version=embedded_httpd_threads if you want to force this model), as well as cgi.d's SCGI mode (-version=scgi, which uses the same thread management code), things are different: there is only one process, and cgi.d and you are in total control over it... until something crashes. Without process separation, a crash in any request handler will bring the *entire* server down.

But, on the other hand, you can use shared global variables to store cross-request data, just mind your thread safety. You can even pass request socket handles around if you want to get a little hacky (though I don't recommend trying that yourself!).

Bottom line: you CAN store stuff in memory if you want, especially if just building with one of the thread modes, but you sacrifice some genericness and crash protection if you do.
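Here is a quick sketch of what cross-request state looks like in one of the single-process modes (hitCounts and its mutex are my own illustrative names, not anything cgi.d provides):

import arsd.cgi;
import core.sync.mutex;
import std.conv : text;

__gshared int[string] hitCounts;
__gshared Mutex hitCountsLock;

shared static this() {
	hitCountsLock = new Mutex();
}

void handler(Cgi cgi) {
	int count;
	synchronized(hitCountsLock) {
		// survives between requests because every request runs in this one process
		hitCounts[cgi.pathInfo]++;
		count = hitCounts[cgi.pathInfo];
	}
	cgi.write(text("Hits on this path so far: ", count));
}
mixin GenericMain!handler;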

Also, while cgi.d supports websockets in theory, none of these models are conducive to long-lived requests. Traditional cgi mode is best for this - you can have thousands of processes spawned to handle clients - but it isn't super efficient. In all other modes, something like a websocket will keep a worker busy and can quickly crowd out new requests, seemingly locking users out of your site. A whole new model would better cater to that use case. (Indeed, I have previously referred people to the vibe.d project for those instead, though I would point out that the vast majority of web sites can work fine with standard HTTP as originally intended!)

On the other hand, all of cgi.d's modes provide an easy API and broad compatibility with outside libraries, and I wouldn't want to sacrifice that in the name of more efficient websockets. I want to have the best of both worlds.

The New Idea

Going back to my impetus, I wanted to communicate a user action between requests, and send that data to any listening browsers. Possible ways to implement this include:

  1. Use an existing external method to store necessary state or events (like a FIFO pipe, a database, or even an ordinary file - all possible since I was the only user) and use long polling to communicate changes to the browsers. This is the sane solution overall - it would have reached "good enough" for my needs in about 15 minutes of work, but where's the fun in that?!
  2. Use something like WebRTC for peer-to-peer communication. I've never actually used it before, but I know it can support this use case. It is even arguably the best fit from a technical perspective and somewhat educational for me... but meh, I just wasn't feeling like it.
  3. Use a websocket with cgi.d's existing support. It'd work, and again, it only needs to support TWO concurrent requests, so the theoretical weaknesses do not apply. But, I'd still need to communicate data across them, so this is basically a slight variation on #1, just using a newfangled fancy websocket instead of a traditional HTTP polling solution. Meh.
  4. Use a websocket with a new event loop backing it. Now things're getting interesting!
  5. Use an EventSource with a new event loop backing it. Ooooooooo! (IE and Edge don't support them (it was listed as "under consideration" but with Microsoft dropping their engine, I guess it is all moot now anyway), but you can emulate with other methods like websockets in Javascript, or just not worry about it, since I'm the only user and it works in my Firefox...)

So, I decided to go with the last couple options (settling on EventSource for my game, since it was just kinda lovely, but the code is written with websocket in mind too and will soon be able to work just as well for both).

Yes, it is time to write another new event loop and process manager inside cgi.d!

Event Driven I/O

There's been a lot of talk about event I/O in recent years. With the release of node.js, new vigor was breathed into the concept as it became more easily available to the web development community.

D has its version of the concept too, in vibe.d, which aims to simplify the code compared to node.js by putting the handlers in fibers, which automatically yield and resume where left off on I/O calls, instead of using explicit callbacks or promises or what-have-you.

cgi.d, however, has not gotten on this bandwagon, opting to keep its traditional layout. In part, this is because I didn't want to break my longstanding policy of commitment to backwards and forwards compatibility with itself and with dmd/phobos (all within reason), and in part because I didn't want to break my policy of avoiding dependencies, but a big part of it is still my vision of an easy API and easy interoperability with third party APIs.

cgi.d is over ten years old, and over that decade, I have used it on lots of projects, interfacing with databases, email servers, web servers, C libraries, even with PHP components. This has all been easy because cgi.d is not very opinionated - it works with just about anything else you want to throw at it.

And a good part of that is its I/O models! With vibe.d, if you want to use a database, you are probably going to look for a vibe-specific database interface that knows about fiber yielding. If you just call a regular blocking function (including an external library's event loop!), you risk breaking the whole server's execution model - what about all the other fibers waiting on you? Of course, vibe.d does provide a worker thread pool and can work with user-defined threads that call those libraries, but that is extra work. With cgi.d, such things just work and give acceptable behavior even when used in the most direct, basic manner.

I like that a lot. I might not be winning many artificial benchmarks (though cgi.d's performance is much better than many popular frameworks!), but it is very easy to use and very adaptable. I don't want to lose those advantages.

But at the same time... there ARE tasks where the event-driven I/O model is an excellent fit, and one of those is when you have a great many long-lived connections that each need only quick handling (in fact, that is pretty much the short description of its ideal use case!)... you know, like my turn-based game! (well, ok, it has 2 connections. But that's beside the point, this is WEB SCALE!!!!)

The Best of Both Worlds, Part I

Captain's Log, stardate 43989.1. cgi.d has arrived in event I/O land in response to a distress signal from a turn-based game colony.

Given all the background above, I decided that an elegant solution would be to have the CGI program - in whatever mode it's in (though I might also do it in-process upon request for the threaded modes, because I can, but that is a possible future direction) - spawn a helper process and pass specific connections over to it when they opt in to the new event loop.

This, of course, has more initial overhead than just using an event system in the first place, but I deem this unimportant because the passed-off connections are supposed to be long-lived anyway, so the initial cost will pay itself off quickly enough, and we don't have to sacrifice anything in terms of backward compatibility, flexibility, or usability in the main library. It is all still there!

Moreover, keeping it all there means you can still use all that stuff for your setup - want to do authentication in the connection handshake? Go ahead, you can use the database library. You can access the session cookies or other headers through the same Cgi class interface you already know. (Or if you don't already know it, it is almost certainly similar to what you already know from other frameworks! cgi.get["var"]; cgi.write("hello"); etc.) You can use the garbage collector with wild abandon, even "stop-the-world" doesn't cross process boundaries! It remains easy to use, even when using the new features.
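For instance, the authenticate-in-the-handshake idea might look something like this (checkSessionCookie here is a placeholder for whatever lookup you already do - it is not a cgi.d function - and the bucket naming is just for illustration):

import arsd.cgi;

// placeholder standing in for your real session/database check
bool checkSessionCookie(string id) { return id.length > 0; }

void handshakeHandler(Cgi cgi) {
	if("event-stream" in cgi.get) {
		// the normal Cgi interface, so cookies and headers are right there
		auto session = "sessionId" in cgi.cookies;
		if(session is null || !checkSessionCookie(*session)) {
			cgi.setResponseStatus("403 Forbidden");
			return;
		}
		// hand the authenticated connection off to the event server,
		// putting it in a per-user event bucket
		sendConnectionToEventServer(cgi, "user-" ~ *session);
	}
}
mixin GenericMain!handshakeHandler;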

And, I coded it in such a way that I can use it for three different internal tasks: an event distributor, a websocket event manager, and a session storage helper. And since I can use it for three different things, that means it is realistically generic - I am very likely to make that underlying functionality available for library users too, after a while (I want to make sure I am actually happy with it before publishing it though - once I publish it, I commit to backward compatibility, remember.)

How, exactly, does it work though?

Mr. Worf... fire.

The Best of Both Worlds, Part II - Technical Details

On the usage side, the EventSource server looks something like this:

import arsd.cgi;
void requestHandler(Cgi cgi) {
	// the client sent us a new event - forward to the event server
	if("click" in cgi.post) {
		// args: event bucket, event type, event data, event lifetime
		sendEventToEventServer("just-testing", "click", cgi.post["click"], 0);
		return;
	}

	// the client wants the stream - pass off this connection to the event server
	if("event-stream" in cgi.get) {
		// args: Cgi instance, event bucket
		sendConnectionToEventServer(cgi, "just-testing");
		return;
	}

	/* can handle other types of requests here normally */
}
mixin GenericMain!requestHandler;

And, of course, the Javascript side is trivial:

// send clicks to the cgi program
document.getElementById("board").addEventListener("click",
	function(event) {
		var req = new XMLHttpRequest();
		req.open("POST", "my_program");
		req.setRequestHeader('Content-type',
			'application/x-www-form-urlencoded');
		req.send("click=" + event.target.id);
	}
);

// receive clicks from the cgi program
var source = new EventSource("my_program?event-stream");
source.addEventListener("click", function(event) {
	var ele = document.getElementById(event.data);
	if(ele)
		ele.classList.add("player-clicked");
});

BTW, I think the EventSource API is underrated. When websockets came out, most attention shifted to them instead, but there are a lot of nice use cases for a long-lived, read-only event connection coupled with regular HTTP posts for updates, like I did here. It meshes nicely with existing REST API designs and has built-in reconnect and replay facilities too. But I digress a little.

Now, let's go under the hood.

When the user code calls arsd.cgi.sendEventToEventServer or arsd.cgi.sendConnectionToEventServer, the first thing it does is try to connect to an existing event server, via UNIX domain socket (on Posix) or named pipe (on Windows). The name of this connection is unique to the cgi.d client build, allowing you to customize the code for this application and run alongside other versions on the same server. (I may change my mind on this, since recompiling any update will assume a new server needs to be spawned. I'll probably make it a template argument or static constructor that only changes if you actually want to customize it.)

If this fails, it will start the server automatically. The server is embedded in the executable as part of the arsd.cgi library, so it just needs to start a copy of itself, passing the --event-server command-line argument. Remember, cgi.d provides its own cgiMainImpl through arsd.cgi.GenericMain, with command line argument handling already built in! This works in all compilation modes (though I might let you version this out, I am going to compile it in by default), including traditional CGI, though note it may spawn the event server as the user account the web server runs as. If you want to control that, you should start it manually. It starts via CreateProcess on Windows and posix_spawn on Posix right now.

Of course, once it starts the server, it attempts to reconnect. Being a local connection, a connect that is going to fail will fail quickly, so the retry uses a short timeout on the assumption that the server startup failed. It will throw an exception if it still cannot connect. (This detail is not yet implemented.)
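A rough sketch of that connect-or-spawn handshake on the Posix side (the socket path, helper name, and retry details here are illustrative guesses, not cgi.d's actual internals):

import std.socket;
import std.process : spawnProcess;
import std.file : thisExePath;
import core.thread : Thread;
import core.time : msecs;

Socket connectToEventServer(string name) {
	auto addr = new UnixAddress("/tmp/" ~ name ~ ".sock"); // assumed naming scheme
	auto sock = new Socket(AddressFamily.UNIX, SocketType.STREAM);
	try {
		sock.connect(addr);
	} catch(SocketOSException) {
		// no server yet: start a copy of ourselves in event-server mode,
		// then retry once after a brief pause
		spawnProcess([thisExePath(), "--event-server"]);
		Thread.sleep(50.msecs);
		sock = new Socket(AddressFamily.UNIX, SocketType.STREAM);
		sock.connect(addr); // throws if the startup failed too
	}
	return sock;
}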

And after that, it is just regular IPC for sending events! The event connection hand-off is a little more involved: it needs to pass a file descriptor across process boundaries. My first thought, of course, was to just fork() and inherit the handle. But then I couldn't reuse the existing server, rendering that nice event loop useless! So, instead, I used the SCM_RIGHTS feature of Unix sockets, or WSADuplicateSocket on Windows, to pass it over. In the case of EventSource, it also starts the connection pre-handoff, flushing response headers and inspecting Cgi's private members to learn what protocol the new process must use (is it a websocket? if so, data must be packed into a websocket message. a cgi connection? if so, it is direct, plain text. a raw HTTP connection? if so, it needs to be made into an HTTP response chunk.)
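For the curious, here is a minimal sketch of the SCM_RIGHTS technique on the Posix side, with the CMSG_* macro arithmetic written out by hand and error handling omitted. It assumes your druntime exposes msghdr, cmsghdr, SCM_RIGHTS, and sendmsg in core.sys.posix.sys.socket; it is illustrative, not cgi.d's actual code.

import core.sys.posix.sys.socket;
import core.sys.posix.sys.uio : iovec;
import core.stdc.string : memcpy;

// CMSG_ALIGN written out by hand so we don't depend on the C macros being bound
size_t cmsgAlign(size_t len) {
	return (len + size_t.sizeof - 1) & ~(size_t.sizeof - 1);
}

// send fdToPass as ancillary data over the already-connected unix socket unixFd
void passFd(int unixFd, int fdToPass) {
	ubyte[1] dummy = [0];                  // must send at least one byte of normal data
	iovec iov;
	iov.iov_base = dummy.ptr;
	iov.iov_len = dummy.length;

	// buffer for the control message; the union keeps it properly aligned
	union ControlBuffer {
		cmsghdr forAlignment;
		ubyte[cmsgAlign(cmsghdr.sizeof) + cmsgAlign(int.sizeof)] bytes; // CMSG_SPACE(int)
	}
	ControlBuffer control;
	control.bytes[] = 0;

	msghdr msg;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = control.bytes.ptr;
	msg.msg_controllen = cast(typeof(msg.msg_controllen)) control.bytes.length;

	auto cmsg = &control.forAlignment;
	cmsg.cmsg_level = SOL_SOCKET;
	cmsg.cmsg_type = SCM_RIGHTS;           // tells the kernel the payload is file descriptors
	cmsg.cmsg_len = cast(typeof(cmsg.cmsg_len))(cmsgAlign(cmsghdr.sizeof) + int.sizeof);
	memcpy(control.bytes.ptr + cmsgAlign(cmsghdr.sizeof), &fdToPass, int.sizeof);

	sendmsg(unixFd, &msg, 0);
}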

Once the server receives a connection, it adds it to its event loop - which uses the OS-specific API for event I/O: epoll on Linux, IOCP on Windows, and once I get around to implementing it, kqueue on BSD and Mac.
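The core of such a loop on Linux is quite small; something in this spirit (runEventLoop and the buffer size are illustrative, with error handling and the actual dispatch left out):

import core.sys.linux.epoll;

void runEventLoop(int listeningFd) {
	auto epollFd = epoll_create1(0);

	epoll_event registration;
	registration.events = EPOLLIN;         // wake up when the fd is readable
	registration.data.fd = listeningFd;
	epoll_ctl(epollFd, EPOLL_CTL_ADD, listeningFd, &registration);

	epoll_event[64] ready;
	while(true) {
		auto count = epoll_wait(epollFd, ready.ptr, cast(int) ready.length, -1);
		foreach(i; 0 .. count) {
			auto fd = ready[i].data.fd;
			// read the incoming message or accept the passed-in connection here,
			// then register any new fds with another epoll_ctl call
		}
	}
}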

Why didn't I use one of the various event libraries? Meh, I already know the underlying APIs (arsd.simpledisplay has a full-featured event loop too; that one is biased toward GUI needs, but the principles are the same), and I wanted to make sure I had the low-level access for that fd/handle passing. And besides, I might as well make it as good as I can, custom-tailored for my specific use cases. Finally, cgi.d is a stand-alone module. It does everything it needs to do itself - and I like that a LOT.

Finally, the event code will store and distribute events as needed, issuing event IDs and watching Last-Event-ID headers so reconnecting clients can catch up. It works!
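For reference, what the server ultimately writes to an EventSource client is just text in the text/event-stream framing (the field names below come from the SSE spec; the helper itself is my own illustration):

import std.conv : to;

// format one event for a text/event-stream response
// (multi-line data would need a "data: " prefix on each line)
string formatSseMessage(string eventType, string data, long id) {
	return "event: " ~ eventType ~ "\n"
		~ "data: " ~ data ~ "\n"
		~ "id: " ~ id.to!string ~ "\n\n"; // the blank line terminates the event
}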

Future Directions

I like EventSource quite a bit, and will probably do a client implementation of it too, likely in the arsd.http2 module.

I also need to finish the WebSocket support in this new system, which will work basically the same as the event system, except the event loop will have to be fully customizable. I'll probably do this via subclassing, but I haven't decided on the API yet.

I also want to use the server to store session data between requests - solving one of the problems I brought up in the background section. Right now, arsd.web has various session options like files, in-memory, or signed cookies, but I really want to make a web.d 2.0, probably right in cgi.d. And this session server is one of the first steps. You could also use cookies or other servers easily enough in cgi itself, and none of that will change - just because cgi.d has some session code, it doesn't mean you have to use it.

Lastly, I am tempted to make a delayed job server. Here's what would be cool about that: you'd call a function with a template and some arguments. It passes that to the worker process, which forks to run it (for crash resiliency) at the appropriate time.

I envision the usage code as looking something like this:

void func(string arg) {}

void req(Cgi cgi) {
	scheduleJob!func("arg").at(DateTime.now + 10.minutes);
}
mixin GenericMain!req;

But how would it work with a function? I haven't coded this yet, but consider the following:

import std.traits : fullyQualifiedName;

void function(string[] args)[string] functionsById;

template functionId(alias fn) {
	alias functionId = fullyQualifiedName!fn; // or something
	static this() {
		// wrap() would adapt fn's real signature to string[] args; not written yet
		functionsById[functionId] = wrap(&fn);
	}
}

ScheduledJob scheduleJob(alias fn, T...)(T args) {
	// ScheduledJob and formatArgsForLater are also still to be written
	return ScheduledJob(functionId!fn, formatArgsForLater(args));
}

No matter where you call the template function, it will have to compile that instantiation... which will define the static constructor, which will put all the functions in a runtime table before any of them are used!

Then, we just call the job through command line arguments. And the formatArgsForLater and wrap functions handle type conversions, serialization, etc.
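Continuing that sketch, the dispatch side in the worker process would be tiny - roughly this (nothing here exists yet; --run-job and runScheduledJob are names I just made up):

// invoked by the job server via something like: ./app --run-job <id> <serialized args...>
void runScheduledJob(string id, string[] args) {
	if(auto fn = id in functionsById)
		(*fn)(args); // wrap() already adapted the real signature to string[]
	// unknown ids would just be logged and dropped
}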

I see no reason why this wouldn't work, and I think it would be really cool and fairly uniquely D. (Well, not really unique, I guess, you could call stuff from reflection/interpretation in most languages. But it would only generate the reflection it needs.)

Of course, I could just pass that to another time server... but eh, I kinda want to make it in D anyway. :) I could even embed a little http server - why not, it is in the library already! - to manage the job list too. Maybe that is overstepping the bounds of cgi.d's project scope (which is a reason why cgi.d is fairly long-lived, it aims to do its job and then get out of your way)... but I'm altering the project scope. Pray I don't alter it any further ;)

Can I try it!?

I haven't actually finished this yet. I have it working on Linux, but I want to finish the Windows implementation and make sure it has an adequately generic API before releasing it. And I haven't even started the Mac implementation. So give me another week.

PS I think adrdox has an O(n^2) bit in it somewhere. As I typed more of this post, the regeneration took longer and longer - 3 seconds in the end, yikes! Still acceptable to me, but maybe I will have to optimize that a little too. However, actually writing content in it is easy! And that's what really matters.