Sunday, 7 June 2015

The Flub Paradox

Blub and Flub

Paul Graham's classic essay Beating the Averages is well worth re-reading. It is the story of how, twenty years ago, Paul Graham and Robert Morris built an online store generator called Viaweb and out-manoeuvred their many competitors using their secret weapon, Lisp. But it is much more than a success story with a fairytale ending.

He identified what he called the Blub Paradox: why people persist in using so-so programming languages when they could be so much more productive using something more powerful. He works from the assumption that there is a hierarchy of power of programming languages, starting with Assembly and working up to your favourite language, the one you would love to use in your job. He calls the so-so, median language Blub [1]. The paradox is that when Blub users look down at 'inferior' languages, they can't imagine using such obsolete technology; but when they look up the power hierarchy they don't see greater power, but weirdness and attitude. The reason, Mr Graham thinks, is that they think in Blub.

For example, the Blubbist looks at Haskell, and first sees the different surface syntax, the dismaying lack of curly braces and the mathematical terseness of the function definitions. Then they see that data is immutable, and think: "These variables don't vary! How is it possible to write programs without mutable state? And why?" The point is that they don't recognize the power, and hence don't acknowledge the hierarchy.

Mr Graham's essays about programming tend to assume the natural superiority of Lisp over all other languages; you don't have to agree with him about this to appreciate his arguments. I'll call the 'obviously' superior language in any comparison Flub, for much the same reasons he calls the so-so language Blub. I can then isolate the common characteristics of ueber-languages and their proponents, and emphasize that both Blub and Flub are time-dependent variables. In other words, Lisp was the Flub of its day. A few years ago Flub was Haskell, and recently the new contender appears to be Rust, judging from the buzz. (Note: this is about tracking fashion, not a comment on the actual virtues of these languages.) There has been a lot of new language design happening in the last seven years or so, so I don't doubt that there will be a new value of the variable Flub in a few years. (These are just observations gleaned from reading Reddit and Hacker News, which is probably the closest I get to following sport.) [2]

Joe Hacker and The Continuum of Power

Mr Graham is of course careful to qualify his assertion that there is a natural continuum of power in programming languages. He would not himself use Lisp for everything, since Perl is a better tool for the text slicing-and-dicing of system administration - cooks have more than one pot. He was (after all) a guy getting things done, who wasn't going to let purity and aesthetics slow him down. Contrast this with a quote about Edsger Dijkstra from an interview with Donald Knuth: "It would make him physically ill to think of programming in C++". Which is appropriate; Dijkstra was a computer scientist [3]. But a bit ... weird to those of us who enjoy programming as a means of making vague exciting ideas come alive and making an impact. After all, Mr Graham ended up selling Viaweb to Yahoo for $50 million in shares in 1998.

Any closer examination of 'power' in this context reveals it to be a slippery concept. Is BASIC more powerful than Assembly? I doubt it: you can do anything in assembly, and BASIC is deliberately limited [4]. Assembly is very long-winded, however, and involves constantly playing with sharp knives; if a job can be done adequately in BASIC, then it should be. Using assembly for a BASIC kind of problem is sheer premature optimization and requires a much higher skill level. So BASIC is more expressive than assembly. It will take even fewer lines to do the job in Python, so Python is more expressive than BASIC. A better working definition of 'power' here is therefore 'expressiveness'. In the hands of a master, Lisp is more expressive still, because of its meta powers (code that writes code). So Paul Graham and Robert Morris could out-code their competitors; a small posse of smart hackers in tune with their tools can always run rings around larger gangs of programmers led by non-technical people. And that was their secret weapon, ultimately.

So the more expressive a language the better? Not necessarily; consider APL, famous for its one-liners. This represents 'too much of a good idea' at least for people who like typing with normal keyboards. Expressiveness has a lot to do with library support, not just ideas-per-line; Python is well-known for having libraries to do just about anything and finds a lot of use in the scientific community.

'Power' and 'Expressiveness' turn out to be separate concepts. In the end, there is no simple continuum that you can use to line programming languages up, nose to tail, feeble to awesome.

We prefer to reduce vector spaces to scalars, perhaps because of this psychological need to rank things and people. In particular, the multi-dimensional nature of programming space undermines the central Blub argument: "By induction" he writes, "the only programmers in a position to see all the differences in power between the various languages are those who understand the most powerful one".

Except there isn't such a beast.

Blub: Language or Community?

Blub is probably more an attribute of a language's community than the language itself.

Consider Java. In the "Enterprise", only the architect gets to have fun; the developers are not expected to take pleasure in their work, and the feelings of the users are irrelevant, since it is only their managers that matter.[5] This is the very opposite of the Flub spirit.

But Java is a pretty productive language, if used with an independent mind. The proverbial 'small posse' could do very disruptive things with it. To use it effectively, however, requires using an IDE, and Flubbists hate IDEs, partly because their finger muscle memory has been overspecialized from spending too much time in Eighties power editors, but also because using an IDE is too closely associated with Blub practices. (People who can't ride a bicycle without training wheels, basically.) Having recently made my peace with IDEs again, after a long sabbatical, I can attest that they are irritating - like a pedantic pair programming partner. But you learn to ignore the yellow ink, and other fussy ways, to get the productivity boost.

Now I don't doubt you can do better than Java the language, and still remain in Java the ecosystem; Scala shows signs of actual adoption, and Clojure is an attempt at re-Flubbing Lisp. But an underwhelming language is no obstacle to getting things done by determined talent, if the talent is pragmatic enough and does not get physically ill like Dijkstra. Knuth himself wrote TeX in 'literate' Pascal; his idea was to embed the code in a clear narrative, and incidentally get around the limitations of the language (it did not support separately compiled modules, for instance.)

The Flub Paradox

The Flub Paradox can be stated like this: although Flub is self-evidently more powerful and productive than Blub, relatively few people use it and its effect on the real landscape of innovation appears minimal. Where is Flub being used as a secret weapon, in the way described by Paul Graham? Why aren't its practitioners exploiting their natural advantage?

An example of the Flub paradox is the Haskell web framework Yesod; the list of sites powered by it is not earth-shattering once you take away the Haskell-related ones.

A disruptive startup is more likely to use Blub in creative ways, focusing on the idea, not the implementation. Facebook dominated its market - using PHP - which everyone agrees is crap. Google built up the world's biggest advertising agency using Java and C++; Android is a Java-like ecosystem that runs in most pockets these days.

(At this point, "The Unreasonable Effectiveness of Blub" seems like a good title for an article: Simon Newcomb's analysis of heavier-than-air flight confronted by the tinkerings of bicycle makers. Be my guest.)

Mr Graham says (and it's still true) that "when you're writing software that only has to run on your own servers, you can use any language you want". But the modern web is a dance between the server and the client, and that client is running JavaScript.

The vitriol surrounding JavaScript is interesting and revealing. Granted, it is not well designed and was knocked together in a ridiculously short period of time; but it has good influences, like Scheme's closures[6]. It could easily have been something much worse, like Tcl. Yet the Flubbists hate it: it's untidy and used by yahoos (pun intended.) It is an obstacle to the march of Flub in a way Blub never is.

That's also perhaps why so many words are wasted on flaming Go; a deliberately simplified, anti-theoretical language that is gaining ground, particularly in the business of writing server software. Go is like a predatory shark appearing amongst dolphins; competition for resources that also eats your babies. It is that terrible thing, a rising Blub.

If you are a Flub practitioner, you want there to be Flub job opportunities; it is thus very important to establish the idea that Flub is the next big thing. A veritable deluge of articles appears, explaining the beauties of Flub and giving crash courses on the mathematical background needed to understand it. For languages that arise in academia, this is necessary work to establish your reputation and secure a comfortable position from where you can continue to write more articles. If they remain in academia, it is necessary and also sufficient.

That's partly why there is a Flub Paradox; people like Paul Graham who are sharp hackers and prepared to risk their livelihood to follow a startup dream aren't common. Plus, although they may ascribe their success to Flub, it is actually only one of many factors, and the most important of these are talent and motivation. Put a group of merely good Lisp programmers on a project, embed them in a corporate environment, and the stellar results are unlikely to be reproducible. (I don't doubt they will be more productive than Java programmers. But will the result blend?) Big companies understand this problem well, and prefer Blub programmers, who are easier to source and come with less attitude.

Part of the problem is over-selling Flub as a panacea (Fred Brooks' "No Silver Bullet" again.) For example, there has been a great deal of heat around the new language Rust from the Mozilla foundation; a strongly-typed systems language with better guarantees of safety.

To use Rust for writing programs that can afford the overhead of garbage collection would be premature optimization. An engineering analogy: titanium is a fantastic metal, stronger than steel yet half the weight - it does not rust; but it's a bitch to work with, and expensive. By Mr Graham's argument, we should prefer it over aluminium, all other things being equal, except that they ain't. Using a more advanced language could be significantly more expensive than just doing it in Java. I'm not just thinking of the developer cost, but the time cost. For instance, it is reported that a 2.4 kloc Rust program takes 20sec for a dev build; that's very long for such a dinky program. Java, Go and C would be practically instantaneous; for comparison, a 10 kloc C++ program takes under 10sec for a full rebuild, and average rebuild times are about a second. It is important that Rust works on its incremental build story. We hear the authentic voice of despair: "getting blocked for several minutes every time I need to recompile is a big productivity killer after a while."[7]

I don't doubt the world needs safety guarantees, given the messes created by cowboy firmware. C is famous for letting you shoot your own foot; in safety-critical applications, you are shooting someone else's foot. Rust is a sincere attempt at solving that kind of problem, as well as other systems software, like operating systems.

However, these are fairly niche areas of development; it is unlikely that major operating systems will emerge written entirely in Rust, and embedded engineers are a conservative bunch - they usually come from an EE background. Also remember that theoretical correctness and safety guarantees are old preoccupations of the industry, and Ada is well entrenched in some parts. I suspect that the idea of a magically safe language will turn out to be yet another silver bullet; it would be interesting to be proven wrong.

Edit: It was probably a mistake to mention actual concrete values for Flub, again see[1]. The point I wish to emphasize has nothing to do with the value of Flub or Blub, and everything to do with not seeing Flub used as a disruptive force of change, as Lisp was in Paul Graham's original essay. It is probably simply too early to tell.

[1] Trashing a developer's language of choice by name is like insulting a patriot's country or his mother's apple pie - the nerd equivalent of bumping the pool cue of a big man in a bar. It makes further rational discussion impossible.

[2] It would be interesting to do sentiment analysis on these august websites and selected social-media feeds, and track the buzz surrounding popular programming languages. Not exactly science, but numbers are good fuel for an argument. The Tiobe Index is approximate and not good at capturing changes in the 'long tail' where Flub lives.

[3] It's entirely possible that programming in C++ can make you physically ill, but the point is that you have to learn it and use it to know. Dijkstra arrived at this conclusion from first principles.

[4] BASIC was originally a teaching language from Dartmouth College, baby FORTRAN. Niklaus Wirth's Pascal is another example of a language designed to teach a very structured way of thinking. So criticisms of Pascal are a bit unfair, because it wasn't intended for large-scale applications. The dialects that appeared later, which some of us remember with affection, were much more capable.

[5] The Marxist concept of 'alienation of labour' is applicable here, since it always was as much a psychological as an economic concept. Like a craftsperson becoming a factory worker, enterprise programmers are both separated from the value they create and divorced from the pleasure of creating value. No fun, and no big bucks either.

[6] Closures are functions that carry around their surrounding state with them; they are probably the most powerful thing I experienced through Lua. A pity that JavaScript was standardized quickly in such an uncooked state, since the steady development of Lua produced a much cleaner, simpler design. In many ways it is like JavaScript without braces: prototype-based object system, field references are lookups (a.f == a['f']) but with proper lexical scoping. The Web would be a better place if it ran on Lua, but modern Lua arrived too late. As an example of how surface syntax has such an unreasonable hold on developer opinion, Lua indexes from 1, not 0. This is apparently a problem. Besides, it does not have curly braces, which are an obvious marker of good language design.

Fortunately, modern C++ does have closures ('lambdas') and this has made me a happier man.

[7] Personally, this would be a deal breaker, because it would mess with my flow. Everyone has different strengths and patience isn't one of mine. I spent too much energy railing against the situation in C++, but learned to accept given reality when I got better hardware. But I still miss Delphi.

Sunday, 31 May 2015

What can C++ Libraries Learn from Lua?

The Convenience of Text Pattern Matching

const char *text = "Some people, when confronted with a problem, think \"I know, I'll use regular expressions.\". Now they have two problems.";

(Example for std::regex from http://en.cppreference.com/w/cpp/regex)

Parsing text with Regular Expressions is a powerful technique traditionally available with so-called 'scripting' languages, from the grand-daddy Awk through Perl onwards. Regular expressions are first-class citizens of these languages and their users get adept at using them. Of course, there can be too much of a good thing, and sometimes those users forget that they have perfectly good if-statements for making complex matches easier to write and read. I like Lua string patterns precisely because they are more limited in scope. Strictly speaking, although similar, they aren't regular expressions since they lack the 'alternation' operator |. But they are a good balance between simplicity and power, which is the global virtue of Lua, together with smallness. (Lua in total weighs less than the PCRE library.)

C++ and its older brother C are not very good at text manipulation, by the standards set by these dynamic languages. So this investigation is about how we can use regular expressions for text wrangling, first in C and then in C++, and how the standard ways can be improved. My method will simply be to use the Lua string functions as a design template, whenever appropriate.

The C Way

For fairly obvious reasons, regular expressions are not so easy to use in C, although they are part of the POSIX standard and thus always available on compliant systems. There is no equivalent to regexp literals (/..../) so you have to compile the expression explicitly before you use it. You must also create a buffer to receive the positions where the match and its submatches occurred in the string.

 regex_t rx;
 regmatch_t matches[1];  // there's always one match
 regcomp(&rx,"[[:alpha:]]+",REG_EXTENDED);

 if (regexec(&rx,"*aba!",1,matches,0) == 0) {
     printf("regexec %d %d\n",matches[0].rm_so,matches[0].rm_eo);
 } else {
     printf("regexec no match\n");
 }
 ...
 // explicitly dispose of the compiled regex
 regfree(&rx);

Doing something more realistic than just matching is a bit more involved. Say we wish to go over all the word matches with length greater than six in the above useful quotation:

 const char *P = text;
 char buffer[256];  // words are usually smaller than this ;)
 while (regexec(&rx,P,1,matches,0) == 0) {
     int start = matches[0].rm_so, end = matches[0].rm_eo;
     int len = end-start;
     if (len > 6) {
         strncpy(buffer,P+start, len);
         buffer[len] = 0;
         printf("got %d '%s'\n",len,buffer);
     }
     P += end; // find the next match
 }

Contrast with the Lua solution - note how '%' is used instead of '\' in regular expressions, with the character class '%a' meaning any char in the alphabet:

 for word in text:gmatch('%a+') do
     if #word > 6 then
         print ("got",#word,word)
     end
 end

So clearly there is room for improvement!

A Higher-level C Wrapper

I am not alone in finding classical regexp notation painful, especially in C strings where the backslash is the escape character. E.g. if we wanted to use special characters literally: "\$\(([[:alpha:]]+)\)". The Lua notation for this expression would simply be "%$%((%a+)%)", which is easier on the eyes and the fingers. So I've provided an option to rx_new to convert Lua notation into classical notation. It is a very simple-minded translation: '%' becomes '\', '%%' becomes '%', and the Lua character classes 's','d','x','a','w' become the POSIX character classes 'space', 'digit', 'xdigit', 'alpha' and 'alnum' within brackets like '[[:space:]]'. The semantics are not changed at all - these regexps only look like Lua string patterns, although they are mostly equivalent.

An optional POSIX-only part of llib (see rx.c in the llib-p directory) provides a higher-level wrapper. rx_find is very much like Lua's string.find, although we don't have the convenience of multiple return values.

 Regex *r = rx_new("[[:alpha:]]+",0);
 int i1=0, i2=0;
 bool res = rx_find(r,"*aba!",&i1,&i2);
 printf("%d from %d to %d\n",res,i1,i2);

 // Now find all the words!
 int start = 0,end;
 while (rx_find(r,text,&start,&end)) {
     int len = end - start;
     if (len > 6) {
         strncpy(buffer,text+start,len);
         buffer[len] = 0;
         printf("[%s]\n",buffer);
     }
     start = end;  // find the next match
 }
 ...
 // generic disposal for any llib object
 unref(r);

The need for an extra 'matches' array has disappeared; it is now managed transparently by the Regex type. It's pretty much how a Lua programmer would loop over matches, if string.gmatch wasn't appropriate, except for the C string fiddling - which is essential when you specifically don't want to allocate a lot of little strings dynamically.

Here is a cleaner solution, with some extra cost.

 int j1=0,j2;
 while (rx_find(r,text,&j1,&j2)) {
     str_t word = rx_group(r,0);
     if (array_len(word) > 6) {
         printf("%s\n",word);
     }
     unref(word);
     j1 = j2;
 }

rx_group will return the indicated match as a llib-allocated C string; we could have used our friend strlen, but array_len is going to be faster, since the size is in the hidden header. I've put in an unref to indicate that we are allocating these strings dynamically and they won't go away by themselves.

So, some code for extracting 'NAME=VALUE' pairs in a string separated by newlines (the ever-flexible C for-loop helps with not having to do j1=j2 at the end of the loop body, where these things tend to get forgotten, leading to irritatingly endless loops.)

 Regex *pairs = rx_new("(%a+)%s*=%s*([^\n]+)",RX_LUA);
 str_t test_text = "bob=billy\njoe = Mr Bloggs";
 for (int j1=0,j2; rx_find(pairs,test_text,&j1,&j2); j1=j2) {
     str_t name = rx_group(pairs,1);
     str_t value = rx_group(pairs,2);
     printf("%s: '%s'\n",name,value);
     dispose(name,value);
 }

This is easier to write and read, I believe. Since the loop counters are not used in the body of the loop, and since this is C and not C++, you can write a macro:

 #define FOR_MATCH(R,text) for (int i1_=0,i2_; rx_find(R,text,&i1_,&i2_); i1_=i2_)

There are also functions to do substitutions, but I'll leave that to the documentation. llib is linked statically, so using part of it incurs little cost - I'd estimate about 25Kb extra in this case. You will probably need to make normal copies of these strings - a general function to copy strings and other llib arrays to a malloc'd block is here:

 void *copy_llib_array (const void *P) {
     int n = array_len(P) + 1;   // llib always over-allocates
     int nelem = obj_elem_size(P);
     void *res = malloc(n*nelem);
     memcpy(res,P,n*nelem);
     return res;
 }

News from the C++ Standards Committee

Some may think I have anti-C++ tendencies, but a professional never hates anything useful, especially if it pays the bills. So I was interested to see that regular expression support has arrived in C++.

 #include <regex>
 using namespace std; // so sue me
 ...
 // 'text' is the Useful Quote above...
 regex self("REGULAR EXPRESSIONS",
         regex_constants::ECMAScript | regex_constants::icase);
 if (regex_search(text, self)) {
     cout << "Text contains the phrase 'regular expressions'\n";
 }

That's not too bad - the Standard supports a number of regexp dialects, and in fact ECMAScript is the default. How about looking for words longer than six characters?

 regex word_regex("(\\S+)");
 string s = text;
 auto words_begin = sregex_iterator(s.begin(), s.end(), word_regex);
 sregex_iterator words_end;

 const int N = 6;
 cout << "Words longer than " << N << " characters:\n";
 for (sregex_iterator i = words_begin; i != words_end; ++i) {
     smatch match = *i;
     string match_str = match.str();
     if (match_str.size() > N) {
         cout << "  " << match_str << '\n';
     }
 }

Perhaps not the clearest API ever approved by the Standards Committee! We can make such an iteration easier with a helper class:

class regex_matches {

 const regex& rx;
 const string& s;

public:

 regex_matches(const regex& rx, const string& s): rx(rx),s(s) {
 }
 sregex_iterator begin() { return sregex_iterator(s.begin(),s.end(),rx); }
 sregex_iterator end() { return sregex_iterator(); }
};
 ...
 regex_matches m(word_regex,s);
 for (auto match : m) {
     string match_str = match.str();
     if (match_str.size() > N) {
         cout << "  " << match_str << '\n';
     }
 }

We're finally getting to the point where a straightforward intent can be expressed concisely and clearly - this isn't so far from the Lua example. And it is portable.

Substituting text according to a pattern is a powerful thing that is used all the time in languages that support it, and std::regex_replace does a classic global substitution where the replacement is a string with group references.
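
For instance, here is a minimal, self-contained sketch of such a substitution (the pattern and strings are invented for illustration): every word is wrapped in brackets, with $1 referring to the captured group.

 #include <iostream>
 #include <regex>
 #include <string>
 using namespace std;

 int main() {
     regex word("(\\w+)");                 // capture each word
     string s = "hello dolly";
     // $1 is replaced by the captured text, for every match in the string
     cout << regex_replace(s, word, "[$1]") << "\n";   // [hello] [dolly]
 }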

Alas, there are some downsides. First, this does not work with the GCC 4.8 installed on my Ubuntu machines, but does work with the GCC 4.9 I have on Windows. Second, it took seven seconds to compile simple examples on my i3 laptop, which is an order of magnitude more than I expect from programs of this size. So, in this case, the future has not arrived.

A C++ Wrapper for POSIX Regular Expressions.

Portability is currently not one of my preoccupations, so I set out to do a modern C++ wrapping of the POSIX API, in a style similar to llib-p's Regex type. (Fortunately, the GnuWin32 project has made binaries for the GNU regex implementation available - although they are only 32-bit. The straight zip downloads are what you want, otherwise you will probably have unwelcome visitors on your machine.)

When testing this library on large-ish data, I received a shock. My favourite corpus is The Adventures of Sherlock Holmes from the Gutenberg project; just over 100,000 words, and this regexp library (from glibc) performs miserably at something that is practically instantaneous in Lua. (std::regex is much better in this department.) So I've taken the trouble to extract the actual Lua pattern machinery and make it available directly from C++.

Let me jump immediately to the words-larger-than-six example. It is deliberately designed to look like the Lua example:

 Rxp words("%a+",Rx::lua);  // simple Lua-style regexps
 for (auto M:  words.gmatch(text)) {
     if (M[0].size() > 6)
         cout << M[0] << "\n";
 }

When modern C++ cooks, it cooks. auto is a short word that can alias complicated types (here just Rx::match), but the range-based for-loop is the best thing since std::string. And I've got my order-of-magnitude smaller compile time back, which is not an insignificant consideration.

(if you want to use the Lua engine, then replace the first declaration with simply Rxl words("%a+").)

I resisted the temptation to provide a split method; heaven knows the idea is useful but doesn't have to be implemented that way. It would return some concrete type like vector<string> and it would introduce a standard library dependency other than std::string into the library. Rather, Rx::match has a template method append_to which will take the results of the above iteration and use push_back on the passed container:

 vector<string> strings;
 words.gmatch(text).append_to(strings);

If you wanted a list<string> instead, then it trivially happens. You can append to a container and so forth without awkward splicing.
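
For instance (a two-line sketch, reusing the words object from the example above):

 list<string> word_list;
 words.gmatch(text).append_to(word_list);  // push_back into any such container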

What would it mean to populate a map with matches? I don't think there's one answer, since there is no clear mapping, but here is one way of interpreting it: the expression must have at least two submatches, and the first will be the key and the second the value:

 Rx word_pairs("(%a+)=([^;]+)",Rx::lua);
 map<string,string> config;
 string test = "dog=juno;owner=angela";
 word_pairs.gmatch(test).fill_map(config);

Again, this will work with any map-like container, not just std::map, as long as it follows the standard and defines mapped_type. The user of the class only pays for this method if it is used. These are considerations dear to the C++ mindset.
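
To see the mapped_type point in isolation, here is a self-contained sketch; the fill_map_like helper and its data are invented for illustration, but the mechanism is the same: any container that defines mapped_type and operator[] will do.

 #include <iostream>
 #include <map>
 #include <string>
 #include <unordered_map>
 #include <utility>
 #include <vector>

 // copies (key,value) pairs into any map-like container
 template <class MapLike>
 void fill_map_like(MapLike& m,
        const std::vector<std::pair<std::string,std::string>>& pairs) {
     for (const auto& p : pairs)
         m[p.first] = typename MapLike::mapped_type(p.second);
 }

 int main() {
     std::vector<std::pair<std::string,std::string>> pairs =
         {{"dog","juno"},{"owner","angela"}};
     std::map<std::string,std::string> m1;
     std::unordered_map<std::string,std::string> m2;  // also defines mapped_type
     fill_map_like(m1, pairs);
     fill_map_like(m2, pairs);
     std::cout << m1["dog"] << " " << m2["owner"] << "\n";  // juno angela
 }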

I'm an admirer of Lua's string.gsub, where the replacement can be three things:

  • a string like '%1=%2', where the digits refer to the 'captured' group (0 is the whole match)
  • a lookup table for the match
  • a function that receives all the matches

The first case is the most useful. Here we have a match, where the submatch is the text we want to extract (called 'captures' in Lua.)

 Rx angles("<(%a+)>",Rx::lua);
 auto S = "hah <hello> you, hello <dolly> yes!";
 cout << angles.gsub(S,"[%1]") << endl;
 // --> hah [hello] you, hello [dolly] yes!

With the second case, I did not want to hardwire std::map but defined gsub as a template method applicable to any associative array.

 map<string,string> lookup = {
    {"bonzo","BONZO"},
    {"dog","DOG"}
 };
 Rx dollar("%$(%a+)",Rx::lua);
 string res = dollar.gsub("$bonzo is here, $dog! Look sharp!",lookup);
 // --> BONZO is here, DOG! Look sharp!

Forcing the third 'callable' case to overload correctly doesn't seem possible, so it has a distinct name:

 // we need a 'safe' getenv - use a lambda!
 res = dollar.gsub_fun("$HOME and $PATH",[](const Rx::match& m) {
     auto res = getenv(m[1].c_str());
     return res ? res : "";
 });
 // --> /home/user and /home/user/bin:/....

(One of the ways that GCC 4.9 makes life better is that generic lambdas can be written, and the argument type here just becomes auto&.)

In this way, string manipulation with regular expressions can be just as expressive in C++ as in Lua, or any other language with string pattern support.

The files (plus some 'tests') are available here, but remember this is very fresh stuff - currently in the 'working prototype' phase.

Transferable Design

The larger point here is that dynamically-typed languages can provide design lessons for statically-typed languages. C++ and C (particularly) are poor at string manipulation compared to these languages, but there is nothing special here about dynamic vs static: it is a question of designing libraries around known good practices in the 'scripting' universe and using them to make programming more fun and productive in C and C++.

There is a limit to how far one can go in C, but it's clear we can do better, even if llib itself might not feel like the solution. C is not good at 'internal' iterators since function pointers are awkward to use, and so 'external' iterators - using a loop - are better for custom substitutions, etc.

C++ has already got a standard for regular expressions, but it will take a while for the world to catch up: we are being hammered here by heavy use of templates, all in headers. This is of course a classic example of too much of a good thing, because generic code is how C++ escapes the tyranny of type hierarchies, by compile-time static duck-typing. (The fancy word here is 'structural typing'.) For instance, Rxp::gsub can use anything that looks like a standard map.

Even there, I wanted to present an alternative design, not necessarily because I want you to use it, but because looking at alternatives is good mental exercise. The Standard is not Holy Scripture, and some parts of it aren't really ready, in a pragmatically useful way. In the comfort of our own company repository I can choose a solution that does not damage our productivity.

Saturday, 16 May 2015

shmake - A Shell-based Build Tool

Beyond Make

Make is a marvelous tool in the Unix tradition: it does its one thing well. But modern software leans on build tools much more heavily, for conditional configuration and deployment, as well as for only rebuilding files when the files they depend on change. If Make were sufficient, it would not constantly be reinvented.

Make has always assumed that all the other Unix tools would be available, which makes cross-platform hard. So other build systems have appeared, either native build file generators (like CMake generating makefiles and MSBuild files) or directly tracking dependencies and running the compiler, like SCons or my own Lake.

shmake does not try to be cross-platform; its expressive power comes from the Unix shell. In fact, shmakefiles are shell scripts, except they are not executed standalone - rather they are run from within shmake itself. What it does is use shell as its DSL for expressing builds, rather than extending Make with shell-like constructs (like GNU Make) or inventing its own language (like CMake).

Assume a program consists of two source files and a header; here is a shmakefile for building it:

 #!/bin/sh
 . /tmp/shmake.sh

 COMPILE='gcc -c @(INPUT) -o @(TARGET)'
 LINK='gcc @(DEPS) -o @(TARGET)'

 T hello.o hello.c common.h "$COMPILE"
 T common.o common.c "$COMPILE"
 T hello hello.o common.o "$LINK"
 all hello

shmake thinks very much like Make; there are targets, but they're explicitly indicated by T: the target hello.o is the result of applying a compile command to hello.c, with common.h as an extra dependency. The object files are linked to give a target hello, and finally there's a special 'all' target which depends on hello. The commands use @() for variable expansion; for the first target, TARGET is hello.o, INPUT is hello.c and DEPS would be 'hello.c common.h'. Once the shmakefile is run within shmake, the target dependencies are checked and the tools invoked when a target is older than any of its inputs.

By itself, this isn't an improvement over Make. In particular, manually tracking dependencies (that hello.o depends also on common.h) is tedious and hard to get right. The point of this 'hello build' is to emphasize that shmake is not limited to building programs, and is good for general dependency-based programming.

Convenient Compilation

For this simple program, the following shmakefile will do all the above, and more:

 simple$ cat shmakefile
 #!/bin/sh
 . /tmp/shmake.sh

 C hello hello.c common.c

 simple$ shmake
 compiling common.o
 compiling hello.o
 linking hello
 simple$ touch common.h
 simple$ shmake
 compiling hello.o
 linking hello

Note that shmake has detected the dependency of hello.c on common.h!

By default, shmake is not a chatty program (and the -q flag will make it even less chatty). To see the executed commands, use -v for verbose:

 simple$ shmake clean
 simple$ shmake -v
 gcc -c -Wall -MMD -O2 common.c -o common.o
 gcc -c -Wall -MMD -O2 hello.c -o hello.o
 gcc common.o hello.o -Wl,-s -o hello
 simple$ shmake clean
 simple$ shmake -v -g
 gcc -c -Wall -MMD -g common.c -o common.o
 gcc -c -Wall -MMD -g hello.c -o hello.o
 gcc common.o hello.o  -o hello

These are fairly generic flags (never compile C without -Wall; C's warnings are what most other languages would consider errors!) except for -MMD. This asks gcc to write a .d file, which shmake reads to build up target dependencies.

 simple$ cat hello.d
 hello.o: hello.c common.h

So 'C hello hello.c common.c' is a useful shortcut - under the hood, it expands to exactly the first explicit version. Note that since shmake is tracking all created targets, it knows how to construct a default 'clean' target, and the '-g' flag causes a non-optimized debug build to be made. If you pass a target name with extension .a, then C will build a static library; if you pass .so, it will build a shared library.

The implicit FAQ at this point might be:

  • OK, that's convenient. But can these shortcuts be composed?
  • This is a trivial program. How to make it link against libraries?
  • What if I have my own special flags?

First, they are composable because they create targets, and targets can be made to depend on them, just as in Make.

 C first first/*.c
 C second second/*.c
 all first second

Second, the 'C' command supports the usual -I, -L -l flags which are passed to the compile or the link stage. But the 'S' command is more flexible and easier to read. 'S cflags' and 'S lflags' can be used to set custom compiler and link flags, which answers the third point. 'shmake CC=clang' works as expected - name-value pairs become environment variables passed to the script.

Here is a complete shmake file for building Lua - compare to the original makefile.

 #!/bin/sh
 . /tmp/shmake.sh

 LUA_LIB=liblua52.a
 S defines LUA_COMPAT_ALL
 S exports true
 S libs readline
 S libs m

 echo "Building Lua for $PLAT platform"

 case $PLAT in
 freebsd) 
     S defines LUA_USE_LINUX
     ;;
 Linux)
     S defines LUA_USE_LINUX
     S libs dl
     ;;
 ansi)
     S defines LUA_ANSI
     ;;
 posix)
     S defines LUA_USE_POSIX
     ;;
 solaris)
     S defines LUA_USE_POSIX LUA_USE_DLOPEN
     S libs dl
     ;;
 darwin)
     S defines LUA_USE_MACOSX
     ;;
 *)
     echo "unsupported platform $PLAT. One of freebsd,Linux,Darwin,posix,ansi,solaris"
     exit 1
     ;;
 esac

 C $LUA_LIB *.c -x 'lua.c luac.c'
 C lua lua.c $LUA_LIB
 C luac luac.c $LUA_LIB

 all lua luac

This is definitely easier to read than the original. One reason is the 'C' commands support wildcards, and the very useful -x,--exclude flag. Another is how the 'S' command makes conditional use of libraries and includes easier. The original makefile had to shell out make for each case; here we actually have a case statement since we have a real scripting language.

It is straightforward to get a debug build, just clean and use -g. It is also easier to modify. If the first line is replaced with:

 LIB_NAME=liblua52
 if [ -n "$BUILD_SO" ]; then
     LUA_LIB=$LIB_NAME.so
 else
     LUA_LIB=$LIB_NAME.a
     S exports true
 fi

then 'shmake BUILD_SO=1' will make you a shared library. (It will fail to build luac, which needs to be statically linked, but everything else works.)

Scripting with C

Our computers have been steadily getting more memory and faster I/O, even if individual core speeds have stalled. However, C has not been getting bigger. So on modern machines, C programs compile & link very quickly, even without tcc. Of course tcc is convenient because it can skip the compile/link cycle entirely. But there is still that irritating boilerplate before you can write code. When testing llib, I found this shmakefile useful:

 c-script$ cat shmakefile
 #!/bin/sh
 . /tmp/shmake.sh

 # simple C script!
 S quiet true

 cfile=$P.c
 sfile=$P.s

 T  $cfile $sfile "
 cat >@(TARGET) <<block
 #include <stdio.h>
 #include <stdlib.h>
 #include <llib/all.h>
 #include <llib/template.h>

 int main(int argc, char **argv)
 {
 #line 1 \"@(INPUT)\"
 block
 cat @(INPUT) >> @(TARGET)
 cat >>@(TARGET) <<block
     return 0;
 }
 block
 "

 # llib needs C99 support
 C99 -g $P $cfile -n llib

 T all $P "./$P $A"

 c-script$ cat hello.s
 FOR (i,argc) printf("hello, world %s\n",argv[i]);
 c-script$ shmake P=hello A='one two three'
 hello, world ./hello
 hello, world one
 hello, world two
 hello, world three

Again, the shell script is doing all the work - 'hello.c' is a target which is generated by inserting 'hello.s' into some boilerplate. The compile step then creates the target 'hello' from 'hello.c', and finally the default target 'all' causes the program to be compiled and run. On this modest laptop, which most of you would scorn as a development platform, this compiles and executes practically instantly.

What is this '-n llib'? It means that the program needs llib. This is a useful concept borrowed from Lake. shmake resolves the need 'llib' by checking in turn if 'llib.need' or '~/.shmake/llib.need' exist. A need file is a simple config file that allows for expansions in the style of pkg-config's .pc files:

 c-script$ cat llib.need
 prefix=../..
 cflags=-I${prefix}
 libs=-L${prefix}/llib -lllib

If shmake cannot find llib.need, it invokes pkg-config.

To make this accessible everywhere, you could have a wrapper called 'crun' that looks like 'shmake -f ~/mystuff/cscript/shmakefile P=$1' and ensure that llib.need sits in ~/.shmake, with actual prefix of installed llib.

Needs are a useful solution to finding libraries, in the same style as pkg-config, but not requiring it to be present. (pkg-config is not universal, and many packages forget to use it even if it's around.) Even within the Linux ghetto, packagers cannot agree where to put things. For instance, on Debian the tcl headers are in /usr/include/tcl8.6, the Lua headers are in /usr/include/lua5.2; other distros may simply drop them in /usr/include or elsewhere.

Short and Sweet

shmake came out of a need to scratch an itch ('why is building software still so irritating?') and the need to test llib. To make things easier, llib is bundled with the tarball. There is no special install required - just copy shmake onto your path. It was written using the time-honoured Barbarian-in-a-Hurry (BIAH) methodology, so there isn't much care taken about memory leaks. It may occasionally try to eat your homework.

If nothing else, it is a concrete example of using shell as an embedded scripting language. Which is the point I want to emphasize; these are shell scripts. There are many good resources on how to learn shell, and shell has many uses outside the niche of building, unlike gmake. If you have limited time to master things (and this is usually true these days) then it is better to master shell than gmake, precisely because it is more universally useful, and shmake makes that choice easier.

Monday, 4 May 2015

Pragmatic Modern C++ Survival Guide

Compiling: Brute Force Helps

C++ has a reputation for being the slowest mainstream language to compile, leading to light sabre office duels. A certain creative boredom can set in, when people start chasing the dream of the Successor to C++: "The creation myth of Go is something like this: Rob Pike, Ken Thompson, and Robert Griesemer were waiting for a particularly long C++ build to take place when they decided to theorize a new language" - Zack Hubert. It was also a reason why I blew several years of my life on an interactive C++ interpreter.

The fact is that C++ compilers are not slow; what slows things down is the incredible amount of inlined code that the compiler must digest before it can give you the proverbial Hello World application. The six or so lines of code pull in more than 17,000 lines of include files. The culprit here is actually C++ hanging onto an outdated compile/build cycle that was already quaint in 1973.

So the best way to improve build times is to get the best system you can afford, with particular attention to fast disk I/O, like keeping the compiler with its headers and libraries on a SSD. And remember that you will pay for heavily optimized builds; better to test unoptimized with debug information, since then your crash traces can make sense in a debugger. But mainly, the less included code a file needs to bring in, the better.

Embrace the Future

As the quote goes, the future is already here, it's just badly distributed. The modern standard (2011) is only now becoming widely available on all platforms. On Windows, Visual Studio 2013 is pretty much there and the Community Edition is available free with fairly liberal licensing; GCC 4.9 is available from the excellent TDM-GCC project. On Linux, Ubuntu 14.04 has GCC 4.8, and Clang 3.4 is easily available from the repositories; on OS X Clang is now standard.

I mention these choices because it's good to get a second opinion if you're struggling with weird errors; in particular, Clang gives better diagnostics than GCC, even if you would still be using GCC as your 'production' compiler. C++ is a standardized language with a standard library, and you can use that fact.

The new language is much more fun. The standard way to iterate over a container was:

 for (vector<int>::iterator it = vi.begin(); it != vi.end(); ++it) {
     int ival = *it;
     ....
 }

Now there is auto, much relief.

 for (auto it = vi.begin(); it != vi.end(); ++it) {
     int ival = *it;
     ....
 }

And finally, range-based for-loops.

 for (auto ival: vi) {
     ....
 }

You can even do this, thanks to std::initializer_list:

 for (auto i : {10,20,30,40}) {
   cout << i << endl;
 }

And the cost? Totally nada - the short form is completely equivalent to the first verbose form, and will work with any type that defines begin and end.
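
To see that no magic is involved, here is a little hand-rolled range (the Countdown type is invented for illustration); because it defines begin and end, it works directly with the range-based for loop.

 #include <iostream>

 // a minimal 'range': begin() and end() return iterator-like objects
 struct Countdown {
     int from;
     struct iter {
         int value;
         int operator*() const { return value; }
         iter& operator++() { --value; return *this; }
         bool operator!=(const iter& other) const { return value != other.value; }
     };
     iter begin() const { return iter{from}; }
     iter end() const { return iter{0}; }
 };

 int main() {
     for (int i : Countdown{5})   // prints 5 4 3 2 1
         std::cout << i << ' ';
     std::cout << '\n';
 }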

This new form of for loop is interesting, because it's the first piece of basic C++ syntax to be based on the underlying standard library. A big step, because the language does not provide batteries, just the means to create batteries. std::string is not hard-wired into the language, neither is std::map. The Go language decided to implement them as primitives, just as Java allowed System.String to participate in operator overloading, which is otherwise not allowed. Both of these languages are reactions to C++, which achieve simplicity at the cost of generality. The unusually awkward error messages generated by C++ stem mainly from this not-baked-in philosophy, rampant function overloading, plus a possibly excessive reliance on generic programming.

auto represents a very cool idea, local type inference. It is interesting that the new languages of this century lean on it heavily, so that (for instance) idiomatic Go code has relatively few explicit type annotations. Meanwhile, the dynamic language people are adding type annotations, not necessarily for performance but rather for maintenance and better tooling. auto comes with a big gotcha; since C++ is a value-based language, auto v = expression makes v the value type and does a copy; if you want a reference to that value, say auto& v = expression. (This burned me more than once, so I feel I ought to pass on the warning).
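
A minimal illustration of the gotcha:

 #include <iostream>
 #include <vector>

 int main() {
     std::vector<int> vi = {1, 2, 3};

     auto v = vi;     // v is a copy of the whole vector
     v[0] = 42;       // the original is untouched

     auto& r = vi;    // r refers to the same vector
     r[0] = 42;       // now vi[0] really is 42

     std::cout << vi[0] << '\n';   // 42 (changed through r, not v)
 }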

Move semantics are a powerful new idea. The concrete result of this is that you no longer need to worry about a function returning a std::vector, since the vector constructed in the function will be moved to the returned vector; basically its data (a pointer to some values) is moved to the new vector and zeroed out in the old vector. But the old advice to pass const T& if you just want to access a large object still stands, since C++ still copies by value.
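
A small sketch of both points (the function names are invented):

 #include <cstddef>
 #include <iostream>
 #include <string>
 #include <vector>

 // returning a big container by value is fine: it is moved (or elided), not deep-copied
 std::vector<std::string> make_words() {
     std::vector<std::string> words;
     for (int i = 0; i < 1000; ++i)
         words.push_back("word" + std::to_string(i));
     return words;
 }

 // but a large object you only want to read should still be passed by const reference
 std::size_t total_length(const std::vector<std::string>& words) {
     std::size_t n = 0;
     for (const auto& w : words)
         n += w.size();
     return n;
 }

 int main() {
     auto words = make_words();            // moved, not copied
     std::cout << total_length(words) << '\n';
 }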

Parts of a system may want to keep objects from another part. This can be tricky, because we don't have a garbage collector cleaning up when the last 'owner' of the object is gone. Say a GUI object needs to keep a persistent store object as part of its state; what must it do when it dies? It can ignore it, and assume that the persistent store people will free it one day. But how do the persistent store people know that their object isn't still being used? More machinery needs to be added, and so forth. (No wonder big C++ systems are like Heath Robinson/Rube Goldberg cartoons). Better to use a well-proven idea that has a standard implementation - shared ownership through smart pointers.

 class B {
     shared_ptr<A> a;

 public:
     B(shared_ptr<A> a) : a(a) { }

     int method() {
         // this is the underlying pointer
         cout << a.get() << endl;

         // otherwise behaves just like a regular pointer
        return a->method();
     }
 };

 void test() {
     shared_ptr<A> pa (new A());
     B b1(pa);   //b1 shares pa
     B b2(pa);   //b2 shared pa

     // exactly the same operation, since B's share an A
     b1.method();
     b2.method();

 } // b1 and b2 are now dead. No one left sharing pa, so it is deleted.

This is easier to show than explain - it is a way for objects to share an object, such that when the last sharer goes, the shared object may be safely deleted. It works because the default destructor for B destructs its members, releasing the shared pointers. Smart pointers can be put into containers; when the container is destroyed, the shared pointers are again released.

A note on names: readability can be helped by typedefs:

 typedef std::vector<std::shared_ptr<A>> SharedVectorA;

Here's the new generalized typedef in action:

 template <class T>
 using SharedVector = std::vector<std::shared_ptr<T>>;

which can be used like so: SharedVector<A> my_a_list.

Dealing with Errors from the Compiler and Its Community

I must emphasize that C++ compilers, however stubborn and irritating they can be, do not embody active malice. They merely wish to share their understanding of the problem, in a verbose and pedantic way, like a person with no social skills. So the thing to do is concentrate on the first error and ignore the spew.

For example, a naive but hopeful person might think that iostreams can dump out vectors directly. Here are the first lines of the error (clang 3.4)

 for1.cpp:11:9: error: invalid operands to binary expression
 ('ostream' (aka 'basic_ostream<char>') and 'vector<int>')
 cout << vi << endl;
 ~~~~ ^  ~~

That's not too bad! Notice that the compiler uses the terminology of compilers, not humans. You have to know what a 'binary expression' is and that it has 'operands'. These terms are not part of C++, but part of how people talk about C++.

But the compiler will now go on for another ninety lines, telling you how it could not match vector<int> against any of the many overloads of ostream& << TYPE. These can be safely ignored, once you know that you can't dump out a vector directly.

GCC 4.8 makes a complete hash of it, however.

 for1.cpp:11:12: error: cannot bind 'std::ostream {aka std::basic_ostream<char>}' lvalue to 'std::basic_ostream<char>&&'
     cout << vi << endl;

Visual C++ 2010 is relatively clear, but thereafter degenerates into irrelevant confusion:

 for1.cpp(11) : error C2679: binary '<<' : no operator found which takes a right-
 hand operand of type 'std::vector<_Ty>' (or there is no acceptable conversion)

And so an important point: there is more than one compiler in the world, and some of them are more sensible than others.

We move on to the attitude of the community to errors. For instance, unlike many introductory texts, I don't bother to prefix standard types with std:: because the result is easier to read and type. You ought in any case to know the commonly-used contents of the std namespace; if there is an ambiguity, then the compiler will tell you about it. The one thing you must not do is say using namespace std in a header file, because you will truly be Polluting the Global Namespace for everyone else. Anyway, this is one of the little things that can generate a lot of hot air on the Internet - it is a red flag to a certain kind of bull who believes that stylistic quirks represent major heresies. I'm reminded of Henry Higgins in My Fair Lady when he sings "An Englishman's way of speaking/Absolutely classifies him/The moment he talks/He makes some other Englishmen despise him/One common language I'm afraid we'll never get".

Like with compiler messages beyond the statement of the original error, it is a good idea to ignore random online opinion. Much better to read the classics - Dr Stroustrup himself is refreshingly pragmatic. The term 'Modern C++' has been redefined several times in the last twenty years, so remember that much earnest wisdom and advice has expired. It may be an unfashionable position, but reading blog posts and looking at Stackoverflow answers is not the way to learn a programming language like C++ properly. Go read a good book, basically. You're no longer forced to use paper.

Use a Good IDE

I suspect this will upset some bulls, because they themselves are super productive in the editors originally created by the Unix gods. Well, great for them, but I can't help feeling that the anecdotal experiences of a few very talented and focussed people do not represent universal good advice, and certainly aren't data. Personally I find that Eclipse CDT makes me more productive, and in my Windows days I found Visual Studio equally effective. But I will qualify this: as a professional you should not be dependent on an environment that you cannot code outside of - that's when you need a good general-purpose code editor in addition. For instance, when a program is first taking shape, a plain editor is less distracting; I don't write these articles in Word because it is far too busy, and fails to understand the creative writing process: first write, then edit, then format.

There is a learning curve with big IDEs, and you will have to read the manual. For instance, it is straightforward to bring a makefile-based project into Eclipse, but then you have to tell it what your include paths and defines are - it cannot deduce this from your makefile (a task which many people find difficult anyway). Once it knows where to find the includes, the errors start disappearing, and the program becomes a live document, in the sense that any symbol becomes a link to its definition using ctrl-click, and so forth. If you type nonsense, the environment will start emitting yellow ink and finally red. This takes some getting used to, since it will often complain before you've finished an edit. Again, it's about filtering out irrelevant criticism. The benefits go beyond 'hyper-linking' to code completion, where ctrl-enter will complete functions and methods for you, and safe global renaming, which is useful for those of us who can never get names exactly right the first time. Having constant feedback about errors means that often your builds will be correct, first time.

So right tool for the job at hand, and knowing when a tool is appropriate. Investing some learning time in your tools always pays off. For instance, learn the keyboard shortcuts of your editor/IDE and you will not waste too much time with pointing your mouse around like a tourist trying to buy things in a foreign market. Learn your debugger well, because exploring the live state of a program is the best way to track down most bugs. C++ can be frustrating to debug, but GDB can be taught to display standard library objects in a clear way.

There seems to be some contradiction in what I'm saying: first I say that rich environments are too busy and distracting, and then say that they are enormously helpful. The trick is again to know what tool to use for what phase of a project.

For instance, I encourage you to write little test programs to get a feeling for language features - for this an IDE is irritating because setting up a new project is tedious. Better then to have a code editor which can run the compiler and indicate the errors. But for larger programs, you need to navigate the code base efficiently and perform incremental changes.

Good Design and Modularity

Of course, C++ does not have modules. They're likely to arrive in the coming years for the 2017 standard, but currently all we have are 'compilation units', aka 'files'. A class consists of a header file and an implementation file? Not necessarily. I think people are encouraged to keep classes separate to avoid source files getting too large. If a group of classes hunt in a pack then there's no reason why they can't live in the same file, and if nobody outside a file refers to a class, then it doesn't need to be defined in a header. Finally, if a class is just a collection of static functions, then that class is acting like a namespace and should be expressed as such - this is not Java.
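
For example, a hypothetical utility 'class' and its namespace equivalent:

 #include <iostream>

 // Java-style: a class that is really just a bag of static functions
 struct MathUtils {
     static double clamp(double x, double lo, double hi) {
         return x < lo ? lo : (x > hi ? hi : x);
     }
 };

 // C++ style: a namespace says what you mean
 namespace math_utils {
     inline double clamp(double x, double lo, double hi) {
         return x < lo ? lo : (x > hi ? hi : x);
     }
 }

 int main() {
     std::cout << MathUtils::clamp(1.5, 0, 1) << ' '
               << math_utils::clamp(1.5, 0, 1) << '\n';   // 1 1
 }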

Code that needs to see a class obviously needs to see its definition. A weakness of the current C++ compilation model is that such client code also needs to see non-public parts of the class definition, mostly because to create an object, whether directly on the stack or using new, it needs to know how big that class is.

I am not a paranoid person so I don't worry about 'others' seeing the 'insides' of my class definition; but it does irritate me that client code depends on private members, and on all the classes that these members refer to. So any change in the implementation of a class requires a rebuild of all code that refers to it, and all that code needs to pull in the private dependencies. One way around this irritation is the so-called 'Pointer to Implementation' pattern (PIMPL).

 // pimpl.h
 class Test {
 private:
    struct Data;
    Data *self;

 public:
    Test(double x, double y);
    ~Test();
    double add();
 };

Notice the private struct that isn't fully defined! So the first thing the implementation does is define that struct. Thereafter, the methods all refer to the hidden implementation pointer.

 // pimpl.cpp
 #include "pimpl.h"

 struct Test::Data {
    double x;
    double y;
 };

 Test::Test(double x, double y) {
    self = new Data();
    self->x = x;
    self->y = y;
 }

 Test::~Test() {
    delete self;
 }

 double Test::add() {
    return self->x + self->y;
 }

The cost is some extra allocation and redirection when accessing object state. But now imagine that you wish to expose a simple interface to a complicated object, then PIMPL will simplify the public interface and any pure implementation changes will not require a rebuild.
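
Client code only ever includes the small header; here is a minimal (hypothetical) user of the class above, which never needs to be recompiled when struct Test::Data changes:

 // client.cpp
 #include <iostream>
 #include "pimpl.h"

 int main() {
     Test t(1.5, 2.5);                 // no knowledge of Data required here
     std::cout << t.add() << '\n';     // 4
 }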

Another method which uses redirection is to define an interface:

 // Encoder.h
 #include <inttypes.h>
 #include <stddef.h>   // for size_t

 class Encoder {
 public:
     // virtual destructor so deleting through an Encoder* is well-defined
     virtual ~Encoder() {}
     virtual int encode(uint8_t *out, const uint8_t *in, size_t insize) = 0;
     virtual int decode(uint8_t *out, const uint8_t *in, size_t insize) = 0;
 };

The derived subclasses do all the work:

 // simple-encoder.cpp
 #include "encoder.h"

 class SimpleEncoder: public Encoder {
     int encode(uint8_t *out, const uint8_t *in, size_t insize) {
         ...
     }

     int decode(uint8_t *out, const uint8_t *in, size_t insize) {
         ...
     }
 };

 Encoder *SimpleEncoder_new() {
     return new SimpleEncoder();
 }

Then the client code can get the proper subclass with an XXX_new function. This is not the most elegant way of creating subclasses, but it works and it's easy to understand: you add a new Encoder, and add a reference to its builder in the file that's responsible for making new encoders. There is a little cost involved in the virtual method dispatch, but usually this will be dwarfed by the actual payload. (Again, premature optimization...).

The beauty of this decoupling is that the actual details of a particular encoder are kept within a single file, if it fits.

Good design is dividing larger systems into subsystems that are each responsible for a single aspect of the problem. Networking, databases, GUI and so on are separate concerns, and you make your later life easier by separating them out. This also makes for faster incremental builds, since modifying a subsystem's implementation requires recompiling only the affected files.

Monday, 27 April 2015

llib - a Set of Useful C Libraries

C: Why still Bother?

"Programmers waste enormous amounts of time thinking about ... the speed of non-critical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

Donald Knuth

I'll start with the classic Knuth quote in context: even when a program is CPU-intensive, it turns out that usually there's a small part of that program that actually needs to be fast and efficient. This has implications for how we use more low-level languages like C.

It's common to come across the position that C has been superseded by C++. Since modern C is a kind-of subset of C++, the feeling is that one can write 'better C' and cherry-pick the convenient parts of C++, like strings and type-safe containers. Alas, then you move instantly from writing good C to writing bad C++, at least according to the guardians of good C++ style. But other things being equal, C++ has definitely got the edge when it comes to easy-to-use containers and strings, and I've enjoyed that style enough to want it in C as well.

Personally, I enjoy C, I enjoy the fast compiles and small executables that don't have burdensome dynamic dependencies. Call me impatient, but waiting half-a-minute for a program to compile tends to clear out my active mental cache. After all, we have to work within constraints, including our own cognitive limitations. Arguing with a C++ compiler sometimes feels like wrestling with a pig, anyway. The lack of abstractive power in C becomes a strength when debugging code, while common C++ abstractions become hard to inspect.

However, C has a small standard library, so any non-trivial project tends to collect ad-hoc utility functions. Half-baked re-invented wheels litter the landscape of these systems. The clean narrative of C programs is often disturbed by outbreaks of pointer gymnastics, hard to read and maintain. This comes from not understanding Knuth's point - the 97% really doesn't have to be fast - and from the fact that higher-level coding in C always involves manual resource management. For instance, if I write a function to split a string using a delimiter into an array of strings, I will have to provide a function which cleans that array up. Any non-trivial C object will have a named destructor. For instance, in the mosquitto messaging API, we have mosquitto_sub_topic_tokenize and the corresponding mosquitto_sub_topic_tokens_free. POSIX regular expressions have regcomp and regfree, and so forth.
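
To make that concrete, the shape of such an old-fashioned API usually looks something like this (the names are hypothetical, and they anticipate the strings_free/strings_len point made later on):

 // every allocating function needs a matching length and cleanup function
 char **strings_split(const char *s, const char *delim);  // allocates an array of strings
 int    strings_len(char **strs);                         // ...how many did we get?
 void   strings_free(char **strs);                        // ...and the caller must remember this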

Automatic storage (arrays allocated on the stack) has a powerful appeal, since it's fast (no overhead!) and forgiving (it goes away!). But how big is your stack? I don't know, and with recursion no-one knows. This is classic micro-optimization, and a source of joy for crackers who know how easy it is to smash the stack for fun and profit. Not to say that stack arrays are evil (I think we should reserve this term for mass murderers and the inventors of BASIC), just over-used without care. They are fast, but because they're in shoot-yourself-in-the-foot territory they should only be used when the code actually needs to be fast.

The llib project provides a set of BSD-licensed, useful libraries built around a simple object system, meant to be statically linked with your program. A good contrast would be Gnome's GLIB or Apache's APR, which are big, awkward dynamic dependencies. Since this is now officially the 21st Century (many historians believe the 20th started in 1914), it is modern C. Since we live in an imperfect world but still believe in cross-platform compatibility, it can also be compiled as C++, making it available to Visual Studio users. This will help to explain why the code is unnecessarily pedantic in places (like explicitly casting from void* to T*). The source is also available as a zipball.

Of course, deep embedded programmers, who will work with 4K RAM without obvious signs of distress, play another game altogether. This is not for them.

C with Objects

A llib 'object' is a pointer allocated with a header 'behind' the data - the block is allocated, header written, and the returned pointer is just past the header. There are no smart pointers in C, only dumb pointers with hidden secrets:

 typedef struct ObjHeader_ {  // 64 bits
     unsigned int type:14;
     unsigned int is_array:1;
     unsigned int is_ref_container:1;
     unsigned int _ref:16;
     unsigned int _len:32;
 } ObjHeader;
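
To make the layout concrete, here is a minimal sketch of the allocation trick. This is not llib's actual source, and new_obj and obj_header are names I've made up for illustration:

 #include <stdlib.h>

 // allocate room for the header plus the data, fill in the header,
 // and hand back a pointer just past it
 void *new_obj(size_t size) {
     // the real obj_new also records type information and a disposal function
     ObjHeader *h = (ObjHeader*)calloc(1, sizeof(ObjHeader) + size);
     h->_ref = 1;     // the reference count starts at one
     return h + 1;    // the caller only ever sees this pointer
 }

 #define obj_header(P)  ((ObjHeader*)(P) - 1)   // step back to find the header

All the type, length and reference-count queries work by stepping back to that hidden header.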

These objects are reference counted, so they may be easily shared by other objects - we replace the problem of ownership with the problem of sharing. For instance, llib strings are C strings which always have a terminating NUL-byte, but they have type information and must be freed using obj_unref:

 #include <llib/obj.h>
 ...
 char *s = str_new("hello dolly");
 printf("%d %d\n", strlen(s), array_len(s));  // -> 11 11
 printf("type %s is_array %d\n", obj_typename(s), obj_is_array(s));
 // -> type char is_array 1
 obj_unref(s);   // decrement the reference count

You may safely use these strings in any context expecting a C string, as long as you don't free them. Their header holds type information, the reference count and the array size - the last being useful if you have strings that contain embedded NUL characters. The reference count starts at one, and when it becomes zero, the pointer is deallocated - this is called disposal.

 char *a = str_new("hello world");
 char **s = array_new(char*,2);
 s[0] = obj_ref(a);  // rc -> 2
 s[1] = obj_ref(a);  // rc -> 3
 obj_unref(a);  // rc -> 2
 // string a is still very much alive, since the array holds references to it
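
Disposal itself is simple to picture. Roughly, and again as a sketch of the idea rather than llib's source:

 // what obj_unref boils down to
 void unref_sketch(void *p) {
     ObjHeader *h = (ObjHeader*)p - 1;   // the header sits just behind the data
     if (--h->_ref == 0)
         free(h);   // the real thing also runs any custom disposal function first
 }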

Stroustrup says "In C, arrays don't even know their size". These arrays do, however, which simplifies APIs that use them. For instance, str_split will return an array of strings, and array_len will tell you how big that array is. But how do we solve the pesky problem of freeing that array together with its strings?

 char **ra = array_new_ref(char*,2);
 char *t = str_new("frodo");
 ra[0] = t;        // the array takes over our reference
 ra[1] = ref(t);   // stored twice, so add one extra ref ('ref' is an informal shortcut for obj_ref)
 unref(ra);
 // all dead!
 char **strs = str_split("hello world"," ");
 assert(str_eq(strs[0],"hello"));
 assert(str_eq(strs[1],"world"));
 assert(array_len(strs) == 2);
 unref(strs);
 // the array and its two strings are no more...

ra is a reference array, which will unref all its elements when it is disposed. If str_split were done in the old-fashioned way, I would also have to provide strings_free (for cleanup) and strings_len (perhaps relying on a NULL-terminated array convention). So a little more intelligence in objects means less API cognitive load.

Virtual Destructors

To llibify an object is straightforward, although more tedious than in C++:

 typedef struct {
     const char *name;
     int age;
 } Person;
 static void Person_dispose(Person *p) {
     unref(p->name);
 }
 Person *person_new(const char *name, int age) {
     Person *res = obj_new(Person, Person_dispose);
     res->name = str_ref(name);
     res->age = age;
     return res;
 }
 ....
 Person *mother = person_new("Helga",52);
 char *name = str_new("Bob");
 Person *father = person_new(name,56);
 assert(obj_refcount(father->name) == 2);
 ...
 unref(mother);
 unref(father);
 // name is still alive with rc == 1

So (a) objects share other objects explicitly using refcounting, and (b) they may have a disposal function which is called when they are disposed, at which point they can hand any objects they hold back to the system. But obviously that disposal function can do much more, like freeing resources, so this is pretty much what C++ calls a virtual destructor.

Why str_ref? Because we have no guarantee that the string will be one of ours. If it's a plain C string, statically or dynamically allocated, then we'll have to make a refcounted copy.
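
The logic might look something like this - a guess at the idea rather than llib's source, and is_llib_pointer is a made-up name for the 'one of ours' test discussed near the end of this article:

 char *str_ref(const char *s) {
     if (is_llib_pointer(s))              // already one of our refcounted strings?
         return (char*)obj_ref((char*)s); // just bump the reference count
     return str_new(s);                   // plain C string: make a refcounted copy
 }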

You still have to manage the lifetime of your shared objects, although replacing free with unref gets you half way. The other half is remembering to increment the reference count when a function or object is 'borrowing' a pointer.

It would be most cool if the reference to an object would automatically be decremented when it went out of scope. This can't be done using standard C99 but GCC has a variable attribute cleanup which will call a function when that variable goes out of scope. Clang implements this as well (as does Intel C on Linux) so this is too good not to use. llib wraps this up as a macro:

 {
    obj_scoped Person *john = person_new("john",23);
    ....
 } // john is gone!
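
For the curious, here is one way such a macro might be put together; llib's actual definition may differ, and scoped_unref is my name:

 // the cleanup attribute calls the given function with a pointer to the
 // variable as it goes out of scope
 static void scoped_unref(void *p) {
     obj_unref(*(void**)p);    // p points at the variable itself
 }
 #define obj_scoped __attribute__((cleanup(scoped_unref)))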

Since objects may have custom disposal functions, this is effectively C++'s RAII pattern: people are obsessed with avoiding memory leaks, but resource leaks can be a bigger problem. It's also a way to avoid the single-return pattern that larger C functions adopt, since we now have a disposal guarantee for objects - multiple returns don't have to explicitly unref the objects falling out of scope.

This is not a new idea, of course; see "implementing smart pointers for the C programming language". I don't foreground it as much in llib because it is a non-standard feature which I cannot (yet) find a way to do in C++ (the fallback position) without resorting to actual smart pointers.

Resizable Arrays and Values

llib provides doubly-linked lists and binary-tree maps because, well, that's what we learned at school. These will always be awkward in C: either they must store untyped pointers for you, or your types must contain the necessary next/previous or left/right pointers. It turns out that resizable arrays are flexible and easy to make strongly-typed.

In llib, such arrays are called 'seqs' (inspired by my brief flirtation with Nim). They are essentially pointers to arrays, with a capacity, and they resize using the familiar doubling method, like std::vector.

 int **ss = seq_new(int);
 FOR(i,100) {
     seq_add(ss, 10*(i+1));
 }
 int *a = *ss;  // can always access the array
 assert(a[0] == 10 && a[1] == 20);
 // now take ownership of the array
 a = seq_array_ref(ss);
 // ss is now dead;  a has rc == 1

A common idiom is to build up an array using a seq, and then use seq_array_ref to provide you with the final array, sized to fit the actual number of elements. There are also functions to insert and remove elements, although of course these may be inefficient for large arrays. But as always, "when in doubt, use brute force", as Ken Thompson said.
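
The resizing itself is just the familiar doubling trick. In miniature, and specialized to int for illustration (llib's real seqs are generic, and this is not their actual implementation):

 #include <stdlib.h>

 typedef struct {
     int *arr;
     int  size, cap;
 } IntSeq;

 void intseq_add(IntSeq *s, int x) {
     if (s->size == s->cap) {   // out of room: double the capacity
         s->cap = s->cap ? 2*s->cap : 8;
         s->arr = realloc(s->arr, s->cap * sizeof(int));
     }
     s->arr[s->size++] = x;
 }

(A zero-initialized IntSeq starts empty; llib hides all this behind seq_new and seq_add.)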

Associative arrays are just one of those very cool things you absolutely need. But you don't necessarily need binary search. In llib 'simple maps' are arrays of pointers, where the even elements point to strings and the odd elements point to your data. Combining this with seqs results in the smap_* family of functions.
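
The lookup is then nothing more than a linear scan. In its simplest form - this is the idea, not the actual smap_* implementation:

 // keys live at even indices, values at odd ones
 void *simple_lookup(void **map, const char *key) {
     int n = array_len(map);
     for (int i = 0; i < n; i += 2) {
         if (str_eq((char*)map[i], key))
             return map[i+1];
     }
     return NULL;
 }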

Now it turns out with these two concepts, you can do dynamic data, or as it is pronounced in Computer Science, Lisp. For instance, JSON data maps neatly into arrays of maps of arrays and so forth. To complete the picture we simply need to 'box' primitive values as llib objects, which is a well-known strategy in languages like Java and C# which are semi-dynamic. So value_float will make a typed object out of a double value, the predicate value_is_float can be used to test the type, and value_as_float will 'unbox' the double value. The representation is very simple, just a pointer to a double, plus llib type information. With this, one can construct, read and write arbitrary JSON-style data.
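
Assuming the obvious signatures for the boxing functions just described, that looks like:

 PValue v = value_float(3.14);            // box a double as an llib object
 if (value_is_float(v))                   // test its dynamic type
     printf("%f\n", value_as_float(v));   // unbox -> 3.140000
 obj_unref(v);                            // values are ordinary llib objects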

I suspect that many C programmers would regard this as exactly the kind of thing that keeps them away from Java. But this is not an essential part of using llib, just a flexible way of looking at data. You do not (for instance) need to box primitive types in a normal homogeneous array, only in arrays that may contain any kind of data. The JSON parser will unbox such an array for you, if it finds that all its values are numbers.

 double *xx = array_new(double,3);
 xx[0] = 1;
 xx[1] = 2;
 xx[2] = 3;
 // VAS is short for Value Array of Strings - tolerantly accepts any objects as well
 PValue* p = VAS("one","two","three",xx);
 puts(json_tostring(p));
 // ==> ["one","two","three",[1,2,3]]
 // the other way
 PValue obj = json_parse_string("[10,20,30,40]");
 if (obj_is_instance(obj,"double")) { // check the type!
    double *aa = obj;
    FOR(i,array_len(aa)) printf("%f ",aa[i]);
    printf("\n");
 }
 // ==> 10.000000 20.000000 30.000000 40.000000

Values provide another solution to an old C problem: how to return errors. Everyone agrees that errno is kind of icky, but we can only return one value directly.

 const char *str (int i) {
     if (i == 0)
         return value_errorf("number is bad %d",i);
     char buff[20];
     snprintf(buff,sizeof(buff),"%d",i);
     return str_new(buff);
 }
 ...
 const char *res = str(0);
 if (value_is_error(res)){
     fprintf("error: %s\n",res);
 } else { // ok!
     ...
 }
 obj_unref(res); // in both cases

str always returns a llib string, but an error is a string with a different dynamic type, so that value_is_error(res) is true.

What do Programs Need?

I don't want to give a full overview of the llib API, since there's The Documentation. (I've already been taken to task for writing articles longer than the average attention span.)

Suffice it to say that a major obsession of llib is string manipulation and support for common file formats, since the standard C library is very basic in these areas. We start with an analysis of what non-trivial programs need in library support:

  • parsing command-line arguments arg
  • reading configuration files config
  • file and path operations file
  • flexible and more expressive string handling str
  • reading tabular data table
  • reading and writing XML and JSON xml, json
  • creating documents from templates template
  • parsing structured documents using a lexical scanner scan

It would be an unusual program that used all of this, but the beauty of static linking is "you don't pay for what you don't use". XML/JSON is part of the age we live in, but (particularly for the former) I don't claim these libraries are industrial-strength - their niche is a program needing to read simple XML configuration files, or needing to speak JSON to external programs.

The other beauty of static linking is building executables with no unusual dependencies.

What's the Damage?

All this has a cost, since each object must carry an eight-byte hidden header. So the memory overhead for lots of small objects can be significant.

Most C programmers (including myself) don't like framework fascism. You may safely use parts of llib inside a C application, but you must at least use obj_unref instead of free to clean up. It may be necessary to copy strings and arrays if they might later be freed.

The main discipline needed is to religiously balance every ref with an unref; be aware that reference arrays (and other reference containers) decrement the reference count of all their elements when they are disposed, but that adding an object to such a container doesn't automatically increase its reference count - that must be explicit. My experience is that this is hard to retrofit as an afterthought: write careful leak-free code from the start if the code is going to be part of a larger application. Apparently this was the story when folks tried to package Git as a library - Mr Torvalds had written the fastest code he could, and didn't bother to clean up memory, since the process wouldn't last long enough for it to matter.

Currently, llib isn't thread-safe - the allocations (obj_new) access and modify static program state. unref can bite you if you're unlucky (and eventually everyone is unlucky with threading). Because of this static state, if both a program and a shared library use llib, then they must dynamically link to the core of llib. Fortunately, this is only a few Kb.

A tricky problem remains: reliably detecting whether a pointer is 'one of ours'. This is principally needed when deciding whether to make an llib copy of a passed C string - otherwise these libraries would be awkward to use, since you would have to explicitly call str_new (or simply obj_ref) whenever passing strings. No solution seems entirely satisfactory; on Linux I keep track of the highest and lowest allocated memory pointers, which appears to work, but it would be easily confused by pointers allocated with plain malloc. On Windows the multiple heaps used by the runtime make such a simple heuristic impossible, so I resort to keeping track of every allocation, which eventually becomes inefficient.
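
For the record, the Linux heuristic amounts to something like the following; the names are mine, and in reality the bookkeeping happens inside the allocator:

 #include <stdint.h>
 #include <stdbool.h>

 // every allocation updates these bounds; anything inside them is assumed ours
 static uintptr_t lowest_addr, highest_addr;

 static bool probably_ours(void *p) {
     uintptr_t a = (uintptr_t)p;
     return lowest_addr != 0 && a >= lowest_addr && a <= highest_addr;
 }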

A Longer Example

The utility pkg-config is a descendant of the original gtk-config, which was needed to make building GTK+ applications less painful. As you can see from the end of the linked man page, this program has been re-written a few times. Unfortunately, it has picked up some awkward dependencies along the way, especially GLIB. I suspect its last author was having too much fun, so I decided to do another rewrite in the same spirit of indulgence.

pkg-config --cflags gtk+-2.0 will give you all the compiler flags needed for compiling a GTK+ program, and --libs gives all the linker flags. It works by looking up files with the extension .pc, which describe what build flags that 'package' needs, plus what packages it further depends on, or requires. It's a classic file-and-text-heavy program that any Python programmer would regard as unsuited to the old dog C. But (sorry lads) it has to be fast, because it gets shelled out a lot in makefiles all over the Gnomosphere.

It's a bit long to include in an article, so I invite you to read it here. I've tried to write it in as literate a style as possible, so it is deliberately kept as a single 563-line program.

Although a little larger (and it's hard to call 51K large), pkgconfig has no dynamic dependencies other than libc. (I've trimmed the Linux-specific entries from the ldd output for clarity; the program does nothing fancy, so it's easy to recompile for Windows and so forth.)

 pkgconfig$ ll pkgconfig
 -rwxrwxr-x 1 user user 51268 Apr  5 13:35 pkgconfig*
 pkgconfig$ ldd pkgconfig
     libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb75f1000)
 pkgconfig$ ll $(which pkg-config)
 -rwxr-xr-x 1 root root 38592 Jul 14  2013 /usr/bin/pkg-config*
 pkgconfig$ ldd $(which pkg-config)
     libglib-2.0.so.0 => /lib/i386-linux-gnu/libglib-2.0.so.0 (0xb7625000)
     libpopt.so.0 => /lib/i386-linux-gnu/libpopt.so.0 (0xb7618000)
     libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7468000)
     libpcre.so.3 => /lib/i386-linux-gnu/libpcre.so.3 (0xb742a000)