Math 101

Did you ever see one of those trivial sums on Facebook, which looked so easy and yet you knew there must be some trick going on? Something like:

3 + 2 × 6 = ???

Well, you’re right.

Because we, in English, read from left to right. But in math, you don’t. In math, you work certain parts of a sum out before others, and not necessarily in the order they appear on the page. This is a standard mathematical idea, usually called the order of precedence; Wikipedia talks about it here. So, a × happens before a +, and so on. Working in computing, I’ve come across this a lot. The rules in full are pretty verbose, but the basic thing is that you perform some operations before others. Left or right matters less. It matters for some things, like ÷, but mostly, no.

So, if somebody presented me with a line of computer code which said,

3 + 2 × 6

the first thing I would do is probably fail its review, because it is ambiguous. In fact, it should be written as

3 + (2 × 6)

which gives the exact same result, but by introducing the brackets, you make it obvious. It’s easier for the reader to see what is going on, and that has always been my #1 aim when reviewing computer code.

The bracket says “perform me first”, so you can see we are no longer in the realm of left-to-right.

Once you realise that, the answer is trivial.

3 + 2 × 6 = 3 + (2 × 6) = 15

Easy, huh? But not so easy as you might think. Even Microsoft’s own calculator (on my PC) gets the sum wrong, unless I help it out by adding the brackets.
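
Computer languages follow the same precedence rules, by the way. Here is a minimal sketch in C# (the language I’ll come back to in a later post) showing the readings side by side:

    using System;

    class Sums
    {
        static void Main()
        {
            // Multiplication binds tighter than addition,
            // so the first line is 3 + (2 × 6), not (3 + 2) × 6.
            Console.WriteLine(3 + 2 * 6);    // 15
            Console.WriteLine(3 + (2 * 6));  // 15 – same result, but the intent is obvious
            Console.WriteLine((3 + 2) * 6);  // 30 – the naive left-to-right reading
        }
    }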

A Hot Date

Written for Fandango’s One Word Challenge (FOWC), engine.

I’m gonna post today about a component I once wrote in my professional life, which was a date engine.

For part of my professional life I was a consultant to an asset management company. Asset managers run funds, hundreds of them. The way each fund is set up might be different. Half of it might be based on a commodity, like gold or silver, and half might be based on a spread of stocks in such-and-such a sector. Or you might have a fund which is based on other funds. For example, a fund might be based a third on Fund A, and two-thirds on Fund B. The possibilities are endless.
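
Just to make that concrete, here is roughly how a fund’s make-up might be modelled in C#. A hypothetical sketch, not the design anybody actually used:

    using System;
    using System.Collections.Generic;

    // A fund is just a weighted list of parts; a part might be a commodity,
    // a spread of stocks, or another fund.
    record Part(string Name, decimal Weight);    // Weight as a fraction, e.g. 0.5m
    record Fund(string Name, List<Part> Parts);

    class Demo
    {
        static void Main()
        {
            // A fund based a third on Fund A, and two-thirds on Fund B:
            var fund = new Fund("Fund C", new List<Part>
            {
                new("Fund A", 1m / 3),
                new("Fund B", 2m / 3),
            });
            Console.WriteLine($"{fund.Name} has {fund.Parts.Count} parts");
        }
    }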

We’ve all heard the term pension fund, and in fact the two are very similar ideas. The plan is just to invest into a fund, and allow the fund manager to grow the fund, without having to worry about too much detail. In fact, the funds we handled were popular with many pension companies.

Unlike the raw stocks or commodities themselves, each fund would only deal periodically. They tended to be long-term, rather than used by day-traders. In the simplest case, something might deal every day, but you could also have funds which dealt:

  • every Monday,
  • every other Tuesday,
  • the first Wednesday of each month,
  • the second-to-last Thursday of each month,
  • the 15th of each month,
  • once a year, on 31st December,
  • once a year, on the first Friday,

…and so on. The possibilities were pretty endless, except that in practice, only a few tens of options were actually used.
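
To give a flavour of what capturing a rule means, here is one hypothetical way rules like these might be modelled in C# – a sketch, not the design we actually used:

    using System;

    // Each fund carries a rule describing when it deals.
    enum Frequency { Daily, Weekly, Fortnightly, MonthlyByWeekday, MonthlyByDay, Yearly }

    record DealingRule(
        Frequency Frequency,
        DayOfWeek? Weekday = null,   // e.g. Monday, for "every Monday"
        int? Ordinal = null,         // 1 = first, -2 = second-to-last, etc.
        int? DayOfMonth = null,      // e.g. 15, for "the 15th of each month"
        int? Month = null);          // e.g. 12, for "once a year, on 31st December"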

The place I worked, they used to calculate these dealing calendars manually, for each fund, a year at a time. Just this process, over about 1,500 funds, took somebody about six months every year. And somebody else, another six months to check.

So they wanted to build a system which would automate the process. After all, it’s simple enough to have a computer just apply a set of rules and tag one date onto the last.

So I built an engine so they could do this. You needed to tell it the last dealing date, and once it had this, it would apply the rules for the fund and calculate the next dealing date. So you can imagine, you could run this engine again and again, on each fund, to calculate dealing calendars years in advance.
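
In outline, and building on the hypothetical DealingRule sketch above, the heart of such an engine is one function, applied over and over:

    using System;
    using System.Collections.Generic;

    static class Engine
    {
        // Given the last dealing date and the fund's rule, work out the next one.
        // Only the simplest rule types are sketched here.
        public static DateTime NextDealingDate(DateTime last, DealingRule rule) =>
            rule.Frequency switch
            {
                Frequency.Daily       => last.AddDays(1),
                Frequency.Weekly      => last.AddDays(7),
                Frequency.Fortnightly => last.AddDays(14),
                _ => throw new NotImplementedException("monthly and yearly rules omitted"),
            };

        // Run it again and again to build a calendar years in advance.
        public static List<DateTime> BuildCalendar(DateTime last, DealingRule rule, int count)
        {
            var dates = new List<DateTime>();
            for (int i = 0; i < count; i++)
            {
                last = NextDealingDate(last, rule);
                dates.Add(last);
            }
            return dates;
        }
    }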

The trickiest problem was in capturing all of these rules: flexibly enough to allow many different types of rule to exist, but rigidly enough that each rule could be used to calculate a dealing date.

The other big problem was: what if the engine came up with a non-banking day, for example a weekend, or New Year’s Day?

Plus, the funds were based in different countries/currencies. Different currencies have different ideas about which days are and aren’t bank holidays. So straight away, you also have to capture bank holidays against all the currencies (although funds tended to be set up only in the main half-dozen currencies).

It had to take all of this into account, and more besides.
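
As a sketch of just that adjustment step – assuming a simple roll-forward convention and a hypothetical holidaysByCurrency lookup (whether you roll forwards or backwards is itself a rule):

    using System;
    using System.Collections.Generic;

    static class BankingDays
    {
        // Roll a calculated date forward until it lands on a banking day
        // for the fund's currency. holidaysByCurrency maps a currency,
        // e.g. "GBP", to that currency's bank holiday dates.
        public static DateTime RollForward(
            DateTime date,
            string currency,
            Dictionary<string, HashSet<DateTime>> holidaysByCurrency)
        {
            while (date.DayOfWeek == DayOfWeek.Saturday
                || date.DayOfWeek == DayOfWeek.Sunday
                || holidaysByCurrency[currency].Contains(date.Date))
            {
                date = date.AddDays(1);
            }
            return date;
        }
    }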

We got there in the end. If this sounds quite boring to you, that’s because it was. Working with banks was quite well-paid, but it wasn’t rocket science. The reason it paid well was because people had to be meticulous*. A lot of the time you would be working on something to improve a manual process, either to make it quicker or less error-prone (in this case, both). Banks are quite a specialised environment, in that respect. In later years, when I was hiring people, previous banking experience was always a big factor, just because it showed that somebody was used to that environment.

* some banks are more meticulous than others!

Hello, World

The other day, I posted a little bit about the technical work I used to get up to. Sharp-eyed readers will have noticed that although I talked technical, there was no mention of computer languages whatsoever.

In fact there are lots of computer languages, both object-oriented and not. Largely, it is the constructs that are important; the specifics come second. Once you are good at writing in one language, it’s not so difficult to learn other languages. Traditionally, when a new language arrives, the first demo program is just to print the text Hello, World onto the console, hence my title.
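
In C#, the language this story eventually arrives at, that demo looks like this:

    using System;

    class Program
    {
        static void Main()
        {
            // The traditional first program: print a line to the console.
            Console.WriteLine("Hello, World");
        }
    }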

Some people will say the exact same things about spoken languages, and in fact I got a lot more of an idea about linguistics when I learned French than when I learned English. In English, we just said things, without necessarily knowing the technicalities. Knowing what a past participle was, say, then helped me when I took up German.

And computer geeks will argue for hours about the best computer language, but in reality, it is horses for courses. Some languages are good for some things, others for something else. Plus, there is a Darwinian pattern of natural selection going on – the most useful languages rise to the top and the least useful sink to the bottom. Programmers have their favourites, sure, but this is mostly like somebody saying they prefer blue to red.

My first project was in a procedural environment, using a language called Fortran. Fortran taught me the basics: if this then do that, and so on. So I had a feel for the structure. In Fortran, as in many other languages, you wrote the code in small sections, then checked the syntax by compiling the code. We would then link these pieces together to make the program itself. A program might be anything from one to hundreds of these sections. For this reason, Fortran is known as a compiled language. Not all languages are – the others tend to be looser, and are called interpreted languages – there is an engine which interprets each line of code as the program runs. With the benefit of experience, I’d say compiled languages are far superior, because they allow the developer to find many errors far sooner in the process. The tool I use now even compiles things while I am typing, so I pick up silly errors like typos even sooner. Bottom line: the sooner an error is found, the cheaper it is to fix.

My next project was in an object-oriented environment. PCs were just gaining popularity, but this was before the days of Microsoft Windows. The decision was made to write in an object-oriented language called C++ (C-Plus-Plus). I was sufficiently inexperienced that I had no say in the decision. C++ is also a compiled language, although at the time only a couple of compilers existed for PCs (a compiler has the job of compiling! – you need one to build anything.)

C++ was quite new to the PC environment, although it had been invented (and used in other environments) at the end of the Seventies. It wasn’t invented from scratch, however. Would it surprise you to learn that it is an extension of a language called C? C was a popular computer language which had been invented about a decade earlier – popular, that is, in the very small programming community of those days. C is a compiled language, too. It is not object-oriented but, as you might imagine (object-orientation was synonymous with complexity), it has the advantage of requiring fewer overheads – it is small, fast and efficient. Fewer overheads were a big deal back then – this was a big reason why C was so popular. Another reason it was so popular was, well, because of its popularity. It was a good general choice of language, because people, even then, wrote add-on tools that could neatly fit into C, so people could do a lot more with their programs. And speaking of natural selection, nobody ever heard of the languages A or B!

So I learned about object-orientation and C++. And, of course, this led me to mastering C too. There were some things which C++ gave me which I didn’t have in C, but again there were overhead benefits if I could live with the restrictions. This knowledge also made it easy to pick up a language called Pascal (named after the mathematician and physicist Blaise Pascal). Pascal is compiled, but was not originally object-oriented, although some compilers were extended to provide limited support. Pascal was important because it was very well suited to the new Windows environment, although we never used Windows seriously back then. But really, if you could pick up one language, you could pick up several.

Windows gained ground, however, and by the time I started my second job, Microsoft had produced a C++ compiler for Windows. It was part of their strategy – make the tools for developers to produce programs, and people would use them, the good ones at least. The winner? Everybody ends up using Windows.

And so it continued through the Nineties. Computer hardware got better, Microsoft’s compilers became more functional, and Windows got more functional too, to the point where it became the industry standard. C++ also got more functional, becoming the dominant language for Windows development. But at the same time, it kept its C roots. Pulled in both directions, C++ became messier.

So at the turn of the century, Microsoft invented a new layer to sit on top of Windows. They called this layer the .NET Framework (dot Net), and if you own a Windows PC, you might even have installed it (although a lot of programs install it silently for you). Further, they invented a “new” language, called C# (C Sharp). C# was very heavily based on C++, compiled, of course, and object-oriented. And a darned sight tidier – they got rid of a lot of the sprawl by losing many of the roots of C.

And so we get to 2020. .NET and C# are both mature and still pretty much standard for Windows development. Each has gone through several iterations, just like the iPhone 10 became the iPhone 11. C# is my language of choice these days, and when I write next time on this subject, I’ll try and marry up some of the things I mentioned last time, with the C# language.