Category Archives: Things Explained

Vaccination ‘Hesitance’ Put Bluntly

Question: Do you know anyone who’s had smallpox? How about polio? No?

Well, no wonder there’s a sudden surge of doubt about vaccination – not because we’ve realised how well it works, but because we’ve forgotten what life was like before vaccines.

I am a parent. I can understand that the idea of injecting my child with something that turns out to be harmful would be hard to bear. First do no harm, they say. Can we be sure there is no harm?

Sounds like a fair question… but it isn’t. It’s not that simple.

Having a vaccine may seem like a dangerous intervention, and surely no action is better than a risky one?

Well, if you believe that, you are not alone, but you are making a cognitive error: the error of assuming inaction is a virtue. As Haile Selassie said, “Throughout history, it has been the inaction of those who could have acted; the indifference of those who should have known better; the silence of the voice of justice when it mattered most; that has made it possible for evil to triumph.”

In order to help those stuck in this cognitive trap, I would ask them to consider this thought experiment.


Rather than choosing between going to a doctor for a vaccine, say the famous ‘MMR’ triple-jab, and staying at home, let’s imagine the choice was between two medications: one the MMR, with its supposed* risk of autism, but which gives immunity to measles, mumps and rubella – the other a compound with no known benefits, but a known risk of causing measles, mumps and rubella. And of course the added possibility of causing an epidemic at the same time.

(*) Let’s add to the mix the fact that there is zero evidence of any link between MMR and autism, and also point out that failing to immunise your kids is reckless cruelty not only to them, but also to all those who cannot be vaccinated for ‘real’ reasons, such as being too poor, too young or too sick.

To those still on the fence, this last bit is for you.

Do you not realise that what seems like concerned parenting has actually cost real people their lives? For what?

Yes, drugs go wrong. Yes, corporations are often tempted to hide bad news. Yes, the human body is complex and we do not understand every last detail.

But if you think that those pressures overwhelm the ‘good’ in mankind that has led life expectancy to climb gloriously every single decade this last century, then you are missing all the good news.

No small part of the doubling in life expectancy was due to vaccines.

Please don’t take me with you back to 1900. Rather look at this picture of what smallpox does and thank your lucky stars she’s not your child.

Negative pressure: impossible surely!!?

I read some comments on Scientific American today that instantly made my blood boil. Or cavitate at least.

It was an explanation of how tall trees get water right up to the top. No, I’d never thought about that before either.

Anyhow, anyone who’s drilled a borehole knows you can only suck water up 33 ft before you get a vacuum forming, water boiling and general pumping failure. Hence the need to put a pump at the bottom of a deep borehole.
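As a sanity check on that 33 ft figure, here’s a quick back-of-the-envelope calculation (a minimal sketch, using standard values for atmospheric pressure and water density): suction only works because the atmosphere pushes the water up, so the limit is the height of water column the atmosphere can support.

```python
# Rough check of how high water can be "sucked" up a pipe.
# The limit is the water column that atmospheric pressure can support: h = P / (rho * g)
P_ATM = 101_325       # atmospheric pressure, Pa
RHO_WATER = 1000      # density of water, kg/m^3
G = 9.81              # gravitational acceleration, m/s^2

max_height_m = P_ATM / (RHO_WATER * G)
print(f"{max_height_m:.1f} m  (about {max_height_m * 3.281:.0f} ft)")
# -> roughly 10.3 m, i.e. ~34 ft -- right next to the 33 ft rule of thumb
```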

Now, I had always thought capillary action was what sucked water up plants, handily bypassing this issue, and there, right in the comments, it was asserted that this was a ‘common misconception’…

What, me wrong!? Never!

After the shock, I did what a good scientist is supposed to do: fighting the desire to simply name-call, I watched her darn video.

I remained skeptical. Very skeptical. I again overcame the desire to write rude comments on YouTube and went and read up on it properly…

================================

Ok, so it turns out that there is some sort of truth to it: some clever people do believe water can be ‘sucked’ to the top of tall trees, which does indeed require negative pressure.

So I ask, why won’t the water boil?

Because, they say, it’s ‘meta-stable’. Like super-cooled or superheated water, water can supposedly go into ‘tension’ without boiling if only you can prevent that initial bubble forming. Simple!

A little more thinking and internal wrangling, and I slowly conceded it just might be true. Yes, ok, negative pressure is not really all that radical – it is essentially tension. It’s common in solids; it’s just the idea that water can be ‘tense’ that is difficult to get one’s head around.

So, the process had begun; I started to consider that maybe I was wrong. It’s not pleasant, folks, and I am not trying to beat my own drum – I am sure there are plenty of other times when I’ve failed this test – it was just interesting because here I think I passed it…

Anyway, back to the point. Alas, I then read even more deeply, and though I find myself agreeing that water can indeed be under tension, and that does sort of mean negative pressure, I’ve yet to be convinced that ‘wicking’ is not at least involved in tree sustenance. Anyone who has dropped a dry cloth in water knows the water climbs into the fabric.

Furthermore, if there was negative pressure in the tree’s ‘pipes’ why wouldn’t they collapse?

It took deeper digging, but now all my cognitive dissonance is resolved, and I feel just fine closing my investigation with this makeshift conclusion: that while trees do suck water up (via transpiration and the pull of surface tension in narrow openings), the pressures needed are not too crazy BECAUSE OF THOSE GOOD OL’ WICKING EFFECTS!!

Yup, I have to conclude that the attraction of the fluid for the xylem walls helps ‘keep the water up’ and thus prevents it from pulling too hard on the water above it.

It turns out this is what many others think [great minds for sure], and some [’nuff respect] took the steps of building a pressure probe small enough to poke into a plant’s pipework. What they found supports my newly cherished (but alas already 120-year-old) Cohesion-Tension theory of tree hydration.

In other words, while wicking (capillary action) is not the sole actor, it is there in a critical supporting role. Aaah, that’s better – as you can see, I wasn’t totally wrong 😉

PS. On the other hand, negative pressures seem to be a new and reproducible fact for us to worry about!

Musical Notes Explained Simply

Have you ever wondered how the musical notes we use were chosen?

I mean, when I was growing up I was learning one thing in music class (do-re-mi-fa-so-la-ti-do!) and another in science class (440Hz) and never the twain did meet…

So what gives? I always suspected the musical community were being scientific, but their language was all Greek to me.

Years passed and only rarely did I get the chance to wonder at this question – and meantime my science education was getting the upper hand – I learned how sounds travel through the air and how the ear works – how deep, low notes are the result of compression waves in the air, perhaps a few meters apart, while higher pitched sounds were compression waves much more tightly packed, perhaps millimeters apart. I also learned that a note could have any frequency, so there was no reason to pick out any ‘special’ frequencies.
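To put rough numbers on those spacings, the wavelength is just the speed of sound divided by the frequency. A minimal sketch (taking the speed of sound in air to be about 343 m/s):

```python
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature, approximately

for freq_hz in (20, 440, 20_000):
    wavelength_m = SPEED_OF_SOUND / freq_hz
    print(f"{freq_hz:>6} Hz  ->  wavelength {wavelength_m:.3f} m")
# -> about 17 m at 20 Hz, 0.78 m at 440 Hz, and roughly 17 mm at 20 kHz
```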

However,  just recently I realized, in a flash of light, that with an infinite number of notes to choose from, musicians had very deliberately selected only a few to make music with, and I suddenly wanted to know why. Was it arbitrary? Was it the same in different cultures? Why did some notes seem to go together and others seem to clash? And of course, as The Provincial Scientist, I wanted to know if our early musicians had done well in their choices.

As it is now the era of the internet I set about to find out more and thought it was so interesting, it would be a crime not to report what I learned on my blog. So here is what I learned…

In Search of Middle C

The best place to start is probably a vibrating string. The vibrating string is clearly key to pianos, harps, guitars and, of course, the entire ‘string’ section of an orchestra. If you stretch a string and pluck it, you are starting an amazing process – as you pull on the string, you create tension; you literally stretch the string and store energy in the fabric of the string. When you let go, the string shrinks under that tension, which pulls it straight. Alas, by the time it’s straight it has picked up some speed, and the momentum keeps it going until the string is stretched again – thus the string swings back and forth – and it would continue forever were it not for frictional losses – energy is lost in heating the string, but some is also lost in buffeting the air around the string. The air is pushed, then pushed again with each cycle, creating compression waves that ripple out into the room – and into our ears. Thus we hear the string.

You can see the vibrating string doing its magic here:

[youtube=http://www.youtube.com/watch?NR=1&v=6JeyiM0YNo4]

You can see in the video that the string swinging back and forth is an awful lot like a wave moving up and down the string! Indeed it is!

The speed at which the wave moves (or string vibrates back and forth) – and thus the note we hear – is determined by a few simple factors: the tension in the string, the weight of the string and the length of the string. The greater the tension, the greater the force trying to straighten the string; but the greater the weight, the more momentum there is to make it stretch out again.

It is therefore easy to get a wide range of notes from a string: start with a long, heavy wire and only tension it enough to remove all the slack. The note can then be gradually raised by decreasing the length or the weight of the wire, or by increasing the tension. These are the tricks used in pianos, guitars and so on.
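For an ideal string, those three factors combine into a single formula: the fundamental frequency is f = √(T/μ) / (2L), where L is the length, T the tension and μ the mass per unit length. Here’s a small sketch (the values below are made up, roughly piano-wire-like, purely to show how the trade-offs work):

```python
from math import sqrt

def string_frequency(length_m, tension_n, mass_per_m):
    """Fundamental frequency of an ideal stretched string: f = sqrt(T/mu) / (2L)."""
    return sqrt(tension_n / mass_per_m) / (2 * length_m)

# Made-up, roughly piano-wire-like values, just to show the trade-offs:
print(f"base string:        {string_frequency(0.70,  700, 0.006):6.1f} Hz")
print(f"half the length:    {string_frequency(0.35,  700, 0.006):6.1f} Hz  (one octave up)")
print(f"4x the tension:     {string_frequency(0.70, 2800, 0.006):6.1f} Hz  (also one octave up)")
print(f"4x the mass/length: {string_frequency(0.70,  700, 0.024):6.1f} Hz  (one octave down)")
```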

So far so good. But if you have several strings to tune up, what notes should you pick – from infinitely many – to make music with?

The human ear is an amazing device and can hear notes ranging anywhere from 20 to 20,000 compressions per second (the unit for per second is called Hertz or Hz for short). That is a lot of choice!

As I am sure you guessed, the key is to understand why some notes seem to ‘go together’, and the answer lies back in the vibrating string.

Overtones of Overtones

Firstly, it turns out that when you pluck a string, you actually get more than one note. While the string may swing back and forth in one elegant sweep, it may also carry shorter waves, with half or a third or a quarter of the wavelength hiding in there too. This video shows how one string can vibrate at several speeds:

[youtube=http://www.youtube.com/watch?v=3BN5-JSsu_4&feature=related]

Although the video shows the string vibrating at one speed each time, it is actually possible for a string to carry more than one wave at a time (this amazing fact deserves its own blog posting, but we will just accept it for now).

So when a string is plucked, the string ‘finds’ ways to store the energy as vibrations – it settles on a few frequencies that carry the energy well, called ‘resonant frequencies’. There will be several, but they will all be multiples of one low note. As these higher notes are all multiples of a single low ‘parent’ note, they also have consistent frequency relationships with one another: 3/2, 4/3, 5/4 and many, many others.
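To make that concrete, here’s a small sketch listing the first few harmonics of a string whose fundamental is 100 Hz (a round number chosen purely for illustration), along with the ratios between neighbouring harmonics:

```python
fundamental = 100.0   # Hz -- a round number chosen purely for illustration

# The resonant frequencies of an ideal string are whole-number multiples
# of the fundamental: f, 2f, 3f, 4f, ...
harmonics = [n * fundamental for n in range(1, 7)]
print("harmonics (Hz):", harmonics)

# The ratios between neighbouring harmonics are the familiar 'nice' intervals.
for lower, upper in zip(harmonics, harmonics[1:]):
    print(f"{upper:.0f}/{lower:.0f}  ->  ratio {upper / lower:.3f}")
# -> 2/1, 3/2, 4/3, 5/4, 6/5, ...
```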

String Harmonics

So clearly, once you have one string and you want to add a second, you could tune the second string to try to match some of the harmonics of the first string. The best match is to pick a string whose fundamental note is at 2x the frequency of the first string. This string’s fundamental note will match the first string’s 2nd harmonic (also called its first overtone). The second string’s harmonics will also perfectly match up with pre-existing harmonics from the first string. The strings are what is called consonant – they ‘go together’.

Now although the second string will have some frequencies in common with the first string, it turns out that there is an even stronger reason why these notes will go together – it is because when you play several strings at once, you are no longer just playing the strings: the instrument you are playing is the listener’s eardrum. The eardrum will vibrate with a pattern that is some complex combination of the waveforms coming from the two (or more) strings. When you add two notes together, it is like adding two waves together and you get an interference pattern – the interference may create a nice new sound:

If we add a low note (G1) to a note one octave higher (G2) we get a totally new sound wave.

If, as in this example, one string vibrates at exactly twice the frequency of the other, the two notes will combine to make a handsome looking new waveform, with ‘characteristics’ from both the original waves – but if the frequencies are not a neat ratio, you will get something a bit messy:

This waveform may not repeat, and is unlikely to be consonant with any other notes you may care to add.

Sometimes, when your second string is fairly close in frequency to the first (say 1.1x the first string’s frequency), a second phenomenon rears its head: beating. This leads to the creation of entirely new (lower) frequencies that the ear can hear [click here to listen to a sample]. The sum now looks like this:
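Here’s a minimal sketch of that beating effect (using numpy, with frequencies chosen just for illustration): add two sine waves about 10% apart in frequency and the loudness of the sum wobbles at the difference frequency – that wobble is the beat.

```python
import numpy as np

f1, f2 = 440.0, 484.0            # two notes ~10% apart, chosen just for illustration
t = np.linspace(0, 0.5, 22050)   # half a second of samples at 44.1 kHz
wave = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The sum behaves like a tone at the average frequency whose amplitude
# rises and falls at the *difference* frequency -- the beat.
print("beat frequency:", abs(f2 - f1), "Hz")             # -> 44 beats per second
print("peak of the summed wave:", round(float(wave.max()), 2))
```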

Beating can sound awful, though of course, the skilled musician can actually use it to create useful effects.

Beautiful Ratios

We have seen that once you have selected one note, you have already greatly reduced the ‘infinite’ choice of other notes to use with it – because only some will be consonant. Although the best consonances are at exactly 2x the first frequency, we see that once you have picked two strings, the choice for the third string is more limited. Should it be consonant with the first string or the second? Can it be consonant with both? It can be fairly consonant with both, but only by being 2x and 4x their respective frequencies. If you picked all your strings as multiples of the first string, the ‘gaps’ between the notes would be very big, akin to playing a tune with only every 12th key on a piano. So how can we fill in the gaps?

Well, early thinkers quickly realized that you can’t actually select a perfect set of notes – some combinations will mesh well, others will be just a little bit odd. This realization was probably a bitter pill for early musician-scientists to swallow.

In the end, they came up with many competing options, each designed  to maximise the occurrence of good ratios  – a good example is the just intonation scale:

Note (frequency ratio to the first note): C (1), D (9/8), E (5/4), F (4/3), G (3/2), A (5/3), B (15/8), C (2)

Here, the musician picks two notes that are consonant (C and the next C one octave higher) and then divides the gap into seven steps. Each note sits at a special ratio to the lower C – we get neat ratios of 5/4, 4/3 and 3/2 showing up, which is good; however, the ratios between adjacent notes are much less pleasing!
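Here’s a quick sketch that turns those ratios into actual frequencies (taking the lower C to be roughly 261.6 Hz, a value chosen just for illustration) and prints the step from each note to the next, so you can see how uneven the steps are:

```python
from fractions import Fraction

just_ratios = [
    ("C", Fraction(1)), ("D", Fraction(9, 8)), ("E", Fraction(5, 4)), ("F", Fraction(4, 3)),
    ("G", Fraction(3, 2)), ("A", Fraction(5, 3)), ("B", Fraction(15, 8)), ("C'", Fraction(2)),
]

base_c = 261.6   # Hz for the lower C -- chosen just for illustration

for (prev_name, prev_ratio), (name, ratio) in zip(just_ratios, just_ratios[1:]):
    step = ratio / prev_ratio   # ratio between adjacent notes
    print(f"{name:2}: {float(ratio) * base_c:6.1f} Hz   step from {prev_name}: {step} ({float(step):.3f})")
# The steps come out as 9/8, 10/9 and 16/15 -- pleasingly simple, but not all the same size.
```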

Aside: You will also see that the steps from B to C and E to F are rather small! Now take a look at your piano and note these notes correspond to the white keys on the keyboard that have no black keys between them! This is no coincidence…

Is the ‘just intonation’ division perfect? No, the notes are not all consonant! Remember that with 8 notes in this group, there are 7+6+5+4+3+2+1=28 ratios (or note pairs), and there is no known way to choose them to all be consonant. That is why, although most musical cultures divide their music notes into ‘octaves’ (nicely consonant frequency doublings), there have evolved many different ways to make the smaller divisions.

Western music has tended to divide the octave into 7 notes (the heptatonic scale), but you could really use any number. Let’s stick with 7 for now.

Another popular way to divide the octave is the Pythagorean tuning:

Note (frequency ratio to the first note): C (1), D (9/8), E (81/64), F (4/3), G (3/2), A (27/16), B (243/128), C (2)

This scale is based on prioritizing the 3/2 overlap of harmonics and moves three notes very slightly.
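Here’s a small sketch of one common way those odd-looking ratios are derived: stack perfect fifths (multiply by 3/2) upwards from C, add one fifth downwards for F, and fold every result back into a single octave.

```python
from fractions import Fraction

def fold_into_octave(r):
    """Bring a frequency ratio into the range [1, 2), i.e. back into one octave."""
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

# Six fifths upwards give C G D A E B; one fifth downwards gives F.
ratios = [fold_into_octave(Fraction(3, 2) ** n) for n in range(6)]
ratios.append(fold_into_octave(Fraction(2, 3)))

print(*sorted(ratios))
# -> 1 9/8 81/64 4/3 3/2 27/16 243/128  -- the Pythagorean ratios above
```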

It is key to remember there are dozens of ways to do this, depending on what you are trying to optimise – do you want to match the greatest number of harmonics, or some smaller number of stronger harmonics? It may even be that personal taste could come into play.

The Wonderful Piano

Have you ever wondered why you hear that someone is playing something in C minor or F major? What is the deal there? Well, these are also ‘scales’ – alternative ways to cut up the octave, but from a specific family that lives on the piano.

You see, the piano could also divide the octave into 7 notes, and indeed it was once so divided, but with time musicians realised they could open up more subtlety in their music by adding in more notes. So they decided to add the ‘black notes’, the extra black keys on the keyboard!

So in addition to the 7 notes A,B,C,D,E,F & G, they added C#, D#, F#, G# and A# – they called them ‘half tones’ or accidentals. Of course, there are already two half steps (B-C and E-F) which is why there is no B# or E#. These extra notes gave us 12 smaller steps, and of course choosing 12 consonant notes was even harder than choosing 7!

So, after some hard thinking by scholars including J.S. Bach, a very sensible decision was made – to divide the octave into 12 ‘equal’ steps, which gives us the so-called ‘equal temperament’, the most popular way to tune a piano. To do this, each note is 2^(1/12), or 1.05946…, times higher in frequency than the last one, such that twelve steps give you exactly a doubling.
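A quick sketch of that arithmetic: twelve equal multiplicative steps of 2^(1/12), started here (purely for illustration) from concert A at 440Hz, land exactly on a doubling.

```python
SEMITONE = 2 ** (1 / 12)    # the equal-temperament step, ~1.0594631

print(f"one step:     {SEMITONE:.7f}")
print(f"twelve steps: {SEMITONE ** 12:.7f}")   # -> 2.0000000, i.e. one octave

a4 = 440.0                  # concert A, used here just as a convenient starting point
for i in range(13):
    print(f"{i:2d} semitones above A: {a4 * SEMITONE ** i:7.2f} Hz")
# The last line comes out at 880.00 Hz -- the A one octave up.
```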

However, our musical notation is older than the piano and generally only allows for 7 notes per octave, so how do you write music for 12?

Despite there being 12 notes, composers have tended to still feel that some combinations of 7 notes ‘go together’ better than others and so have persisted in writing music using only 7 notes. Of the many hundreds of ways you could choose the 7 notes, they have selected 12 combinations, the 12 “Major scales”:

The Major Scales (down the left). Each uses only 7 of the 12 notes on the piano keyboard. The shaded vertical lines correspond to the black keys on the piano.

Personally, realising what these scales were was a breakthrough for me. Looking at the above map helped me to realize several things:

  1. Many long pieces of music will completely ignore nearly half (5/12ths) of the keys on the piano! A tune based on a certain ‘scale’ is sometimes said to be played in that ‘key’.
  2. The scale of C-Major ignores all the black keys, and is probably the oldest/original scale.
  3. Each scale is displaced 4 ‘steps’ from the previous scale (there is a #1 beneath each #5). This 1st to 5th note relationship turns out to be important.

Aside: Note that there are also the 12 “minor scales“. These scales actually use the same 12 subsets of keys as the major scales, but are ‘shifted’  – they have a different starting point (base note, or ‘tonic‘).  This may seem a trivial change, but because the gaps (steps in frequency) are not all evenly sized in these scales, the major and minor scales have their two ‘small’ steps in different places, which is supposed to change the feel or mood of the music (or even the gender!)

The Number 5

The number ‘5’ in the pattern we saw above (5th note) was noticed by musicians long before me, and it shows up in other places too.

For example, we saw in the ‘just intonation’ scale above that the note G had a frequency ratio of exactly 3/2 with the note C. This means that when you hear both together, every third vibration of the higher note will coincide with every second vibration of the lower note. They are thus highly consonant – and they are 4 steps apart on the stave. This relationship is called the ‘perfect 5th’. It is again no coincidence that the 5th note of each scale has a frequency exactly 50% higher than the 1st and is the base note (aka tonic) of the next scale. Stepping in 5ths (ratios of 3/2 in frequency) 12 times takes you through very nearly 7 octaves and eventually back to the first scale.

This cycling behavior allowed the invention of a learning tool called the ‘circle of fifths‘, which helps us to understand  the relationships between the scales.

Yet another aside: The ‘perfect fifth’ is called perfect if it is truly a ratio of 3/2 – but recall that pianos have their 12 notes ‘evenly spaced’ (a geometric progression) so the ratio of G to C on the C-Major scale will not be exactly 3/2 – it is actually 0.113% off!
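Two quick checks of those numbers: the equal-tempered fifth (seven semitones) against the perfect 3/2, and the near-miss you get when twelve perfect fifths are stacked against seven octaves.

```python
equal_fifth = 2 ** (7 / 12)     # seven equal-temperament semitones
perfect_fifth = 3 / 2

print(f"equal-tempered fifth: {equal_fifth:.6f}")
print(f"short of 3/2 by:      {(perfect_fifth - equal_fifth) / perfect_fifth * 100:.3f}%")
# -> about 0.113%, as quoted above

# Twelve perfect fifths almost -- but not quite -- equal seven octaves,
# which is why the circle of fifths only closes exactly when the fifths are tempered.
print(f"(3/2)^12 = {perfect_fifth ** 12:.3f}    2^7 = {2 ** 7}")
# -> 129.746 vs 128
```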

But What About Middle-C?

Ok, so we have seen how some notes ‘go together’, and how, once you have one note, you have clever ways to find families of notes that complement it – but that leaves just one question: how do we pick that first note?

The leading modern convention is to use the note A above middle C, and to set it at 440Hz exactly.

The question is, why?

Well, firstly, I shall point out that the 440Hz convention is not fully accepted. For example, anyone who wants to hear the Gregorian chants the way they originally sounded would need to use the conventions of the time. Thus there are pockets of musical tradition that do not want to change how their music has always sounded.

However, when it comes to performing a concert with many instruments, it is useful if they all adopt the same standard. The standard is thus sometimes called the concert pitch, and though 440Hz for A is common, this number has been seen to vary from 423Hz to as high as 451Hz.

So the short answer is, there is no really good reason; the choice of 440Hz really just ’emerged’ as a more common option, and when they standardized they rounded it off. While this answer is ultimately trivial, I find a little amusement in the fact that all the music we hear sounds the way it does for no particular reason!

Conclusion

Before I go, there is a video I want you to look at. I think it shows beautifully how 12 different frequency oscillations can exhibit some wonderful harmony (or harmonics!)

[youtube=http://www.youtube.com/watch?NR=1&v=7_AiV12XBbI]

All Done! Ready to Read Some Music?

The next step is to learn to read musical notation – luckily someone has already written an excellent tutorial with pretty pictures.

All I can hope is that the weird things they teach you in this tutorial will be a little less weird now we have covered the baffling origins of the notes!

Jarrod Hart (Los Olivos, CA, October 2011)

===============================

A couple more useful references:

http://www.mediacollege.com/audio/01/sound-waves.html

http://www.get-piano-lessons.com/piano-note-chart.html

http://www.thedawstudio.com/Tips/Soundwaves.html

Death by KPI. The unintelligent design of the modern company… and what to do about it.

KPI!

A beginner’s guide to KPIs…

Background, Context and all that…

Some of you luckier readers will be wondering what on earth a KPI is. Alas, many of my readers will know, and rarely will they have a happy tale to tell about them. Let me tell you mine.

You see, to me the story of the KPI is none other than the story of the modern trend to remove the human element (that most fallible of elements) from big business. I propose that there has crept up upon us, starting when we came down from the trees and now coming to its final fruition with the industrial revolution, a situation in which the workings of our society, our organisations, governments, armies or companies, are simply too complicated to be designed or managed by any one person, whatever force his or her personality might possess.

No, the time of the one big man, the head honcho, the brain, scheming away in his tower, is over – and we enter the era, nay the epoch, of the human hive.

For now we see that in order to achieve great things, it is the ability to sort and organise mankind, rather than the ability of each man on his own, that matters most.

I could wander from my thesis awhile to describe a few side effects, for example, to point out the age of ‘middle management’ is here to stay, or to philosophize about the world where no one is actually ‘driving’ and the species is wandering like a planchette on a Ouija board and how this explains why no one seems to be able to steer us away from the global warming cliff just ahead…

But no, I will not be pulled off course, I will return to our good friend, the KPI.

So it seems we have these complex organisms, such as the venerable institution that is the company, that have evolved to survive in the ecosystem that is our global economy. Money is the blood, and people, I fancy, are the cells. And just as no particular brain cell commands us, no particular person commands a company. Just as the body divides labour among cells, so does the company among its staff. We train young stem cells into muscle, tendon and nerve. We set great troops of workers to construct fabulous machines to carry our loads just as our own cells crystallize calcium to make our bones.

Having set the scene thus I must move on, for where are the KPIs in all this?

As clever as the cells of our bodies may be, to choose depending on the whims of circumstance to turn to skin or liver or fat, that is but nothing compared with the cleverness with which the cells are orchestrated to make a cohesive body, with purpose and aim,  with hopes and dreams, and of course sometimes even the means to achieve them. And the question is: how is that orchestration achieved?

Being a devout evolutionist, it is clear to me that it was no design, but rather the constant failure of all other permutations, that led to the fabulously clever arrangement – and so it is with the organism that is the company.

It is this conviction that leads me to claim, contrary to the preaching of many business schools, that good companies are not designed, but evolve, and by a process of largely unconscious selection. Like bacteria on a petri dish, companies live or die on their choices, and with every succeeding generation the intelligence of these choices is embroidered into the DNA of the company.

Yes, I am saying that the CEO of Microsoft, or Rio Tinto, or Pfizer, is no more sure of his company’s recipe for success than any one cell in your brain is at understanding how it came to be that you can read these words. Ok, maybe that is unfair. They probably have a good shot at reproducing success in other companies, but they only understand how the company works, not why.

It is well to remember that very few of the innovations present in a modern successful company were developed within that company – the system of raising money from banks or through stocks and shares, the idea of limiting liability to make these investments more palatable, the development of the modern contract of employment to furnish staff; nay, the very idea that a group of people can get together and create the legal entity that is the ‘company’ has been developed over centuries.

Even within the typical office we see many innovations essential to running a business that could never have been ‘designed’ better by a single mind – the systems to divide labour into departments – finance, marketing, sales, R&D, logistics, customer service; the reporting hierarchies and methods for making decisions; the new-employee checklists, the succession plans, the new product stage-gate system, the call-report database, the annual budget, the balance sheet, the P&L – these are all evolved and refined tools that incorporate generations of brain power.

Whether it is the idea of share options or the idea of carbon copies, the list of machinery is endless and forms the unwritten DNA of the modern company. The company of today has little in common with the farmstead of 300 years ago, and indeed, just like the bullet train, it would not work too well if taken back in time 300 years. It only works in its present setting. It is part of a system – an ecosystem.

Now a business tool that has been evolving for some time, slowly morphing to its full and terrible perfection is the Key Performance Indicator or KPI.

The Key Performance Indicator

There is a saying in engineering circles: you cannot improve what you do not measure.

This philosophy accidentally leaked to the business community, probably at Harvard, which seems a great place to monetize wisdom, and so there is presently a fever of ‘measurement’ keeping middle managers in their jobs, and consultants on their yachts.

This sounds reasonable – but let’s pick at it a little.

When the engineer installs a sensor in a reactor to measure its pressure, it is usually just one element in a holistic system of feedback loops that use the measurement in real-time to control inputs to that reactor. Thus we can see that measurements in themselves are not enough, there needs to be an action that is taken that affects the measured property – a feedback loop.

Likewise any action taken in a complex system will tend to have multiple effects and while you may lower the pressure in a reactor, you may raise it somewhere else, thus the consequences of the action need to be understood.

And lastly, if the pressure in the reactor is now right, the value of the knowledge has diminished, and further action will be of no further benefit.

So just like a body, or a machine, a company needs to be measured in order to be controlled and improved, and it occurs to me that these key measurements, the KPIs, need to fulfil a similar set of requirements if they are to be of value.

A few years ago I decided to write a list of KPI must-haves, which I present here:

  1. the property measured must be (or correlate with) a company aim (eg profitability)
  2. measurements need to be taken to where they can be interpreted and acted upon, using actions with predictable effects, creating a closed loop
  3. the secondary effects of that action need to be considered
  4. repeat only if the benefits repeat too

Now let’s take a look at some popular KPIs and see if they conform to these requirements.

Production KPIs

There are a multitude of KPIs used on the ‘shop floor’ of any enterprise, be it a toothpick factory, a bus company, a florist or a newspaper press. Some will relate to machines – up-time metrics, % on time, energy usage, product yield, shelf life, stock turns – indeed far too many to cover here, so I will pick one of my favourites.

“Availability” is a percentage measure of the fraction of time your machine (or factory or employee) is able to ‘produce’ or function. If you have a goose that lays golden eggs, then it is clear that ‘egg laying activity’ will correlate with profits, so point #1 above is satisfied, and measuring the egg-laying frequency is potentially worth doing. If you then discover that your goose does not lay eggs during football matches, you may consider methods to treat this, such as cancelling the cable subscription. A few trials can be performed to determine if the intervention is effective, proving #2. Now all you need to do is worry about point #3; will the goose fly away in disgust at the new policy?

What about #4? Indeed you may get no further benefit if you continue to painstakingly record every laying event ad infinitum. KPIs cost time and money – they need to keep paying their way and may often be best as one-off measures; however, what if your goose starts to use the x-box?

So that is an example of when the KPI ‘availability’ may be worth monitoring, as observing trends may highlight causes for problems and allow future intervention. So surely availability of equipment is a must-have for every business!?

No. There are many times when plant availability is a pointless measure. Consider for example an oversized machine that can produce a year’s supply in 8 minutes flat. So long as it is available for 8 minutes each year, it does not matter much if it is available 360 days or 365 days. Likewise, if you cannot supply the machine with raw materials, or cannot sell all the product it makes, it will have forced idle time and availability is suddenly unimportant.

The simplest way to narrow down which machines will benefit from an availability or capacity KPI is to ask: is the machine a bottleneck?

For other types of issues, let’s look at some other KPIs.

Quality KPIs

Quality KPIs are interesting. Clearly, it is preferable to ship good products and have happy customers. Or is it?

In any production process, or indeed in any service industry, mistakes will be made. Food will spoil, packaging will tear, bits will be left out. It is now fashionable to practice a slew of systems designed to minimise these effects: to detect errors when they are made, to re-check products before they are shipped, to collect and collate customer complaints and to feed all this info back into an ever tightening feedback loop called ‘continuous improvement‘.

KPIs are core to this process and indeed KPIs were being used in these systems long before the acronym KPI became de rigueur. To the quality community, a KPI is simply a statistic which requires optimization. The word ‘Key’ in KPI not only suggests it represents a ‘distillation’ of other numerous and complex statistics, but implies that the optimising of this particular number would ‘unlock’ the door to a complex improvement.

Thus, a complex system is reduced to a few numbers, and if we can improve those numbers, then all will be well. This allows one to sleep at night without suffering  nightmares inspired by the complexity of one’s job.

The reject rate is a common quality KPI – it may encompass many reasons for rejection, but is a simple number or percentage. It is clearly good to minimize rejects (requirement #1), and observing when reject rates rise may help direct investigations into the cause thereof, satisfying requirement #2.

However, from rule #2 we see that this KPI is only worth measuring if there will be follow-up: analysis and corrective action. This must not be taken for granted. I have visited many plants that monitor rejects, and when asked why, they report that head office wants to know. What a shame. Perhaps head office will react by closing that factory some time soon.

The failure to use KPIs for their intended purpose is perhaps their most common failing.

Another quality KPI is the complaint rate. Again, we make the assumption that complaints are bad, and so if we wish to reduce these we should monitor them.

Hold the boat. How does the complaint rate fit in with company aims? We already know that mistakes happen, but eliminating quality issues is a game of diminishing returns, so rather than doggedly aiming for ‘zero defect’ we need to determine what complaint rate really is acceptable.

So here is another common KPI trap. Some KPIs are impossible to perfect, and it is a mistake to set the target at perfection. Think of your local train service. Is it really possible for a train system to run on time, all the time? The answer is an emphatic no!

The number of uncontrolled inputs into a public transit system – the weather, the passengers, strike actions, power outages and the like – will all cause delays, and while train systems can allow more buffer time between scheduled stops to cater for such issues, this type of action actually dilutes other aspects of service quality (journey frequency and duration). Add to that, finally, the fact that a train cannot run early, so losses cannot be recovered.

The transport company will of course work to prevent delays before they occur, and lay on contingency plans (spare trains) to reduce impacts, but the costs and practicality mean that any real and meaningful approach needs to accept that a certain amount of delay is inevitable. A train company could spend its entire annual profit on punctuality and still fall short of perfection.

So it is with most quality issues, the law of diminishing returns is the law of the land. Thus the real challenge is to determine at what point quality and service issues actually start to have an impact on sales and cashflow. This is another common pitfall of the KPI…

The correlation between the KPI and profitability is rarely a simple positive one, especially at the limits.

Some companies get no complaints. Is this good? No, often it is not! This company may be spending too much money on QC. The solution here is to work with and understand the customer – what issues would they tolerate and how often? If you did lose some customers by cutting quality, would the financial impact be greater than the savings?

However, and on the other hand, don’t make the opposite mistake: once a reputation for poor quality is earned, it is nearly impossible to shake.

Financial KPIs

As companies become bigger, there is a tendency to divide tasks according to specialized skill and training. Thus it can happen that the management of a big mining company may never set foot in a mine, may not know what their minerals look like, nor may they know how to actually dig them up or how to make them into anything useful. In other words, they would be useless team members after a nuclear apocalypse.

However, this is no different from our brain which is little use at growing hair, digesting fat or kicking footballs.

It is thus necessary that the organism (the company) develops a system to map in a flow of information from its body to its mind and then another mapping to take the decisions made in that mind and distribute them to the required points of action.

Indeed, blasting in a quarry, or kicking the football, is best done by organs trained and capable, and no less important than the remote commanders in the sequence of events.

And just as our bodies have nerves to transmit information about the position of our lips and the temperature of the tea to our mind, so the company has memos, telephones and meetings. And KPIs are the nouns and subjects in the language used.

Furthermore, some KPIs need to be further distilled and translated from the language of the engineer (Cpk), the quality manager (reject rate) or the plant manager (units shipped) to that of the accountant (revenue) , the controller (gross margin) and eventually that of the general manager and the shareholder (ROI). This is perhaps the main duty of middle management, bless their cotton socks.

Unfortunately, the mapping of everyday activities to financial KPIs is fraught with danger. The biggest concern comes from the multiple-translation issue. That is to say, KPIs can suffer from a case of Chinese whispers, losing their true meaning along the way, resulting in the worst outcome of all: a perverse incentive.

Yes, ladies and gentlemen, this does happen.

Let’s say you want to improve your cash situation. You may choose to change the terms in your sales contracts for faster payment, in essence reducing the credit you allow your customers. This may have the desired effect, lowering the KPI  called “receivables” and this looks good on the balance sheet – but let’s look at requirement #3 in the KPI “must have” list. What are the ripple effects of this move? It is clear this will not suit some of your customers, who, considering recent economic trends, probably also want to improve their cash situation; thus you may lose customers to a competitor willing to offer better terms.

And so we see the clear reason for perverse incentives is the consideration of KPIs individually instead of collectively. There has to be a hierarchy upon which to play KPIs against one another. Is revenue more important than margin? Is shipping a part-load on time preferable to shipping “in full” a little late?

So we see again that the systems used to distill company indicators – and the choice of which decisions are centralized and which are localized – need to be developed and constantly refined using an iterative process. The art of translating the will of shareholders into a charter or mission statement, and then translating that into targets for sales, service and sustainability, is a task far too complex to perfect at the first attempt.

KPI Epic Fails

While on the subject of KPIs I cannot resist the opportunity to bring to mind a few fun examples of KPIs gone badly wrong.

The Great Hanoi Rat Massacre

The French administration in Hanoi (Vietnam) were very troubled by the rat population around the start of the last century, and knowing as they did about the rat’s implication in the transmission of the plague, set about controlling the population. A simple KPI was set – “number killed” – and payments were made to the killers on this basis. There was immediate success, with rats being brought in by the thousand and then the tens of thousands per day. The administration was pleased, though somewhat surprised by the sheer number. Their surprise gradually transformed into disbelief as time wore on and the numbers failed to recede.

You guessed it. The innovative residents of Hanoi had started to breed rats.

The Magic Disappearing Waiting List

Here’s a more recent example from the National Health Service in the UK (the NHS).

A health service is not there to make a profit, it is there to help the population, to repair limbs, to ease suffering, to improve the length and quality of life – and to do this as best it can on a finite budget. So the decisions on where to invest are made with painstaking care – and needless to say, KPIs are involved. Not only big picture KPIs like life-expectancy, or cancer 5-year survival rates, but also on service aspects, such as operation or consultation waiting times.

It will therefore not be surprising to you to learn that the NHS middle management started to measure waiting times and develop incentives to bring these down, or even eliminate them. This sounds very reasonable, does it not?

Now ask yourself, how do you measure a waiting time? Say a surgery offers 30-minute slots – you may drop in and wait for a vacant slot, but as the wait may exceed a few hours it is just as well to book a slot some time in the future and come back then. So one way to measure waiting times is to measure the mean time between the call and the appointment. This of course neglects to capture the fact that some patients do not actually mind the wait, and indeed may choose an appointment in two weeks’ time for their own convenience rather than due to a lack of available slots. Let’s put that fatal weakness aside for the minute as I have not yet got to the amusing part.

After measurements had been made for some time, and much media attention had been paid to waiting times, the thumb-screws were turned and surgeries were being incentivized to cut down the times, with the assumption they would work longer hours, or perhaps create clever ‘drop-in’ hours each morning or similar.

Pretty soon, however, the results started coming in, and the waiting times at some surgeries were plummeting! Terrific news! How were they doing it?

Simple: they simply refused to take future appointments. They had told their patients: call each morning, and the first callers will get the slots for that day. This new system meant nobody officially waited more than a day. Brilliant! Of course it is doubtful the patients all felt that way.

How The Crime Went Up When It Went Down

If you work for the police, you will be painfully aware that measuring crime is difficult. And so it is with the measurement of many ‘bad’ things – for example medical misdiagnoses or industrial safety incidents.

Let’s look at workplace safety; while it may be fairly easy to count how many of your staff have been seriously injured at work, it is much harder to record faithfully the less serious safety incidents – or more specifically, the ones that might have been serious, but for reasons of sheer luck, were not. The so-called ‘near-misses’.

Now to the problem. Let us say you are a fork-lift driver in a warehouse and one day, in a moment of inattention, you knock over a tower of heavy crates. Luckily, no-one was around and, more luckily, no damage was done. So what do you do? Do you immediately go to the bad-tempered foreman with whom you do not get along and tell him you nearly killed someone and, worse, nearly caused him a lot of extra work? Or do you carefully stack the crates again and go home for dinner?

The police suffer a similar predicament. The reporting of a crime is often the last thing someone wants to do, especially if they are the criminal. Now let’s say you are an enterprising young administrator just starting out in the honourable role of crime analyst at the Met. You want to tackle crime statistics in order to ensure the most efficient allocation of funds to the challenges most deserving thereof. Do bobbies on the beat pay for themselves with proportionately reduced crime? Does the ‘no broken windows’ policy really work? Does the fear of capital punishment really burn hot in the mind of someone bent on murderous revenge? Such are the important questions you would wish to answer, and you have a budget to tackle them. What do you do?

You set out to gather statistics of course, and then to develop those tricky little things, the KPIs.

Now let’s say a few years pass, and after some success, you are promoted a few times and your budget is increased. Yay! You have always wished for more money to get more accurate data! Another few months later and the news editors are aghast at the force. Crime is up! Blame the police – no, blame the left – no, blame the right! Blame the media! It’s video games – no, it’s the school system!

No, actually it’s a change in the baseline.  The number of crimes recorded most likely went up because the effort in recording them went up. The crime rate itself may well have gone down.

The opposite can also happen. Say you run a coal mine and you will be given a huge bonus if you can get through the year with a certain level of near misses. Will you really pressure your team to report every little thing? I think not.

So the lesson here is: watch out for a KPI where you want the number to go one way to achieve your longer term aims, but where the number will also depend on measurement effort.

The Profit Myth

Most KPIs are dead dull. The very mention of KPIs will elicit groans and be followed swiftly by a short nap. The ‘volume of sales’ KPI is no different. The issue with the volume KPI is probably made worse by the clear fact that actually thinking about KPIs is a strong sedative. Surely selling more is good? Well, if you can fight through the fog of apathy, and actually think about this for a second, it is easy to see this is often not so.

To see why, it is important to understand price elasticity. It is often true that lowering price will increase sales, so an easy way to achieve a target ‘volume’ (number) of sales is to drop the price. That way you can make your volume KPI look good, as well as your revenue KPI, and so long as there is still a positive margin on all the units sold, your earnings KPI (aka profit!) will see upside. There can’t possibly be any downside, can there?

Of course, astute financial types can find this fault easily, and perhaps the question of ‘how’ would make a good one for a job interview. It turns out more profit is not always good (seriously!). Whether more profit is really worth taking depends on the ratio of the increased earnings to the increased investment that is needed to make them.

There is another KPI I mentioned earlier: the “ROI”, or return on investment. When I discussed it I implied that it was only of interest at the GM level, but really the reason only the GM tends to see this KPI is because it is difficult to calculate, and often only the GM has the clout to get it – yet it should be considered by all. To me, it is the king of KPIs for a publicly listed company. And it turns out the ROI may actually go down with increasing profit.

If making more widgets requires no further investment, then the maths is easy, but that is rarely true.

The question is this: is it better to have a large ‘average’ business or a smaller one with higher profit margins? It turns out, from an investor’s perspective, that the latter is fundamentally preferable.

The ROI treats a business a bit like a bank account, asking: what interest rate does it offer? The business should be run to give the highest interest rate (%), not the highest interest ($). It is always possible to get more interest from a bank account – just put more money in the bank.
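A tiny worked example (with made-up numbers) of how profit can rise while the ROI falls: suppose a price cut wins extra volume, but serving that volume needs more stock and machinery.

```python
def roi_percent(profit, investment):
    """Return on investment -- the 'interest rate' the business pays on the money tied up in it."""
    return 100 * profit / investment

# Made-up numbers, purely to illustrate the point.
# Before: earnings of 20 on an investment of 100.
print(f"before: profit = 20, ROI = {roi_percent(20, 100):.1f}%")

# After: the price cut lifts profit to 26, but the extra stock and machinery
# push the investment up to 160.
print(f"after:  profit = 26, ROI = {roi_percent(26, 160):.1f}%")
# Profit is up, yet every invested dollar now earns less -- the investor is worse off.
```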

Translating a Mission Statement into company KPIs

I mentioned above that the ROI is a pretty darn good KPI – so can we use it alone? Of course not. Recall that the KPIs are mere numbers we measure that try to tell us how we are doing against the company mission statement, and while the company mission statement may unashamedly describe vast profits as a goal, this is almost universally not the whole story.

The organism that is the modern company has one particular need besides profit today, and that is profit tomorrow. The ROI does not capture this need, so more KPIs are required, and trickier ones – ones that capture sustainability, morale, innovation and reputation (brand value). This is the turf of the mission statement.

Even though the more cynical of my readers will know the mission is usually ‘make lots of dosh’, and anything beyond that is window dressing, I would venture that the mission statement is the first step to figuring out which KPIs to put first.

Let’s dissect a few examples. In my own 2-minute analysis I decided they fall into four types:

  1. To appeal to employees:
    McGraw Hill:
    We are dedicated to creating a workplace that respects and values people from diverse backgrounds and enables all employees to do their best work. It is an inclusive environment where the unique combination of talents, experiences, and perspectives of each employee makes our business success possible. Respecting the individual means ensuring that the workplace is free of discrimination and harassment. Our commitment to equal employment and diversity is a global one as we serve customers and employ people around the world. We see it as a business imperative that is essential to thriving in a competitive global marketplace.
  2. To appeal to customers (aka the unabashed PR stunt)
    A recent one from BP:
    In all our activities we seek to display some unchanging, fundamental qualities – integrity, honest dealing, treating everyone with respect and dignity, striving for mutual advantage and contributing to human progress.
    I couldn’t leave this one out from Mattel:
    Mattel makes a difference in the global community by effectively serving children in need. Partnering with charitable organizations dedicated to directly serving children, Mattel creates joy through the Mattel Children’s Foundation, product donations, grant making and the work of employee volunteers. We also enrich the lives of Mattel employees by identifying diverse volunteer opportunities and supporting their personal contributions through the matching gifts program.
  3. To appeal to investors. This is usually a description of how they are different or what they will do differently in order to achieve big dosh.
    CVS:
    We will be the easiest pharmacy retailer for customers to use.
    Walt Disney:
    The mission of The Walt Disney Company is to be one of the world’s leading producers and providers of entertainment and information. Using our portfolio of brands to differentiate our content, services and consumer products, we seek to develop the most creative, innovative and profitable entertainment experiences and related products in the world.
  4. For the sake of it – some companies clearly just made one up because they thought they had to, and obviously bought a book on writing mission statements:
    American Standard’s mission is to “Be the best in the eyes of our customers, employees and shareholders.”

Now a great trick when analysing the statement of any politician, and thus any mission statement, is to see if a statement of the opposite is absurd. In other words, if a politician says “I want better schools”, the opposite would be that he or she wants worse schools, which is clearly absurd. Thus the original statement has no real content, it is merely a statement of what everyone would want, including the politician’s competitors. Thus to judge a politician, or a mission statement, it is important to look not at what they say, but at what they say differently from the rest.

Mission statements seem rather prone to falling into the trap of stating the blindingly obvious, and as a result become trivial, defeating the point. Such is the case with American Standard. Of course you want to be the best. And of course it is your customers, employees and shareholders who you want to convince. Well no kidding!

So discounting those, we can see that a good mission statement will focus on difference. If we look at CVS, their mission is to be easy to use. This may seem like a statement of the obvious, but I don’t think it is – because they have identified a strategy they think will get them market share. Now they can design KPIs to measure ease of use. This is the sort of thinking that led to innovations like the ‘drive-thru’ pharmacy.

If we look at Disney, we can go further. “…[To] be one of the world’s leading producers and providers of entertainment…” OK, so they admit being #1 is unrealistic, and if you want to be taken seriously, you need to be realistic. But if you are one of many, how do you shine? “Using our portfolio of brands to differentiate” – they realise they can sell a bit of plastic shaped like a mouse for a lot more than anyone else can. There is a hidden nod to the importance of brand protection. So KPIs for market share and brand awareness fall right out. They finish off with “the most creative, innovative and profitable entertainment” – well, you can’t blame them for that.

Very rarely, you see a mission statement that not only shows how the company intends to make money, but may also inspire and make pretty decent PR. I like this one from ADM:

To unlock the potential of nature to improve the quality of life.

I have no idea how to get a KPI from that though!

Summary

In this article I have tried to illustrate how measuring a KPI is much like taking the pulse of a body – it’s a one-off health check, yes, but more importantly it can be a longer-term measure of how your interventions are affecting company fitness.

I have also tried to describe some common pitfalls in the use of KPIs and presented four simple tests of their value:

  1. the KPI must correlate to a company’s mission
  2. the KPI must form part of a corrective feedback loop
  3. perverse incentives can be avoided by never considering any single KPI in isolation
  4. repeat the treatment only if the benefits repeat too

I have personally used this checklist (with a few refinements) over the years to some good effect in my own industry (minerals & materials), and though I am confident many of my readers will have more refined methods, I live in hope that at least one idea here will be of benefit to you.

Hysteresis Explained

Hysteresis (hiss-ter-ee-sis). Lovely word. But what on earth does it mean?

Hysteresis is one of those typically jargonny words used by scientists that instantly renders the entire sentence, if not the lecture, lost on its audience. Sure, you can look it up on Wikipedia, but you may die of boredom before you get to the point, so I am going to explain it here.

——

Hysteresis on the way to school

Let’s go for a walk. Let’s say we are ten years old and we are walking to school. The route is simple. The school is a few hundred yards down the hill on the other side of the road. Now consider the question: at what stage do we cross the road? Immediately? Or do we walk all the way to opposite the school before crossing – or somewhere between?

Assuming there are no ‘official’ crossing points, I bet you cross immediately, then walk down the far side of the road.

How can I make this prediction? Well, I assume that crossing the road requires there to be no traffic, so if there is no traffic as you start the journey, it is a good time to cross. If there is traffic, you just start walking down the road until a gap appears, then you cross. This strategy allows you to cross without losing any time. If your strategy had been to cross at the school there is a real risk you will need to wait, thus losing time. So it turns out the best strategy to avoid any waiting is to cross as soon as you can.

So now picture your walk home. Again, it makes sense to cross early on. The result is that the best route to school is not the same as the best route from school. This is an example of hysteresis – or a ‘path dependent phenomena’.

Hysteresis everywhere

The dictionary will drone on about magnetism and capacitance and imaginary numbers. A much nicer example is the melting and freezing of materials – some substances actually melt and freeze at different temperatures. This shows that the answer to the question “is X a solid at temperature Y?” actually depends – on the path taken to that temperature. Just like which side of the road you are on halfway between home and school depends on whether you are coming or going.
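As a toy illustration of that path dependence (with entirely made-up threshold temperatures), here is a sketch of a substance that melts at a higher temperature than it freezes; whether it is solid at an in-between temperature depends on where it came from.

```python
def update_state(state, temperature, melt_above=10.0, freeze_below=0.0):
    """Toy hysteresis with made-up thresholds: the state only flips outside the gap."""
    if state == "solid" and temperature > melt_above:
        return "liquid"
    if state == "liquid" and temperature < freeze_below:
        return "solid"
    return state   # between the two thresholds, the history decides

# Two samples end up at the same temperature (5 degrees) by different paths:
warming = "solid"
for temp in (-5, 0, 5):
    warming = update_state(warming, temp)

cooling = "liquid"
for temp in (15, 10, 5):
    cooling = update_state(cooling, temp)

print(warming, cooling)   # -> solid liquid : same temperature, different answers
```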

It seems to me that falling asleep and waking up also bear some of the hallmarks of hysteresis; although they could be considered a simple state change in opposite directions, they feel very different to me – I seem to drift to sleep, but tend to wake to alertness rather suddenly.

Now think of a golf club in mid swing. As the golfer swings, the head of the club lags behind the shaft. If the golfer were to swing in reverse, the club head would lag in the other direction – thus, you can tell the direction of movement from a still photograph. We can therefore say the shape of a golf club exhibits hysteresis – and again you see why it is called “path dependent”.

This logic can be taken further still – wetting is not the opposite of drying, and likewise heating is rarely the inverse of cooling. Let’s imagine for example that you want a chicken pie warm on the inside and cool on the outside. This is best done by warming the whole pie and then letting it cool a little. The temperature ‘profile’ inside your pie thus depends not only on the recent temperature but has a complex relationship with its more distant temperature history. This particular point is somewhat salient at the moment as we ask the question: is the earth heating up?

So what?

Good question. I’m not a fan of jargon, and hysteresis is not a word I hope to need in my small talk. However, you can see that it encapsulates a rather specific and increasingly important concept that is pretty hard to replace with two or three simpler words; thus it passes my test of “words a scientist should understand that most don’t”. Please let me know your own additions to such a list!


What exactly is temperature? Ever wondered?

We take it for granted. We understand it. It is obvious what temperature is. Cold, warm, hot…obvious.

But how many of us have asked the next question: what is the real difference between a hot stone and a cold one? The answer is interesting and helps us to realise that measuring temperature is much trickier than we tend to suppose.

Over many hundreds of years, many clever people have devised lots of experiments to understand what temperature is; I hope in this article to round up the facts!

Temperature and Energy

For much of history, there were only a few sources of heat – the sun, fire, lava and of course the warmth of living creatures.

People were puzzled by what created it, but it was immediately obvious that it had one consistent behaviour: whenever it had the chance, it flowed. Put something hot next to something cold, and the heat would flow.

Of course you could argue that it was the ‘cold’ that flowed (the other way), but there were no obvious sources of ‘cold’. While ice was clearly cold, it was not a sustainable ‘source’ of cold the way a fire was.

It was also noted that heat melted things, like fat or butter, and that it made some liquids (like molasses) thinner. It could even boil water and make it ‘vanish’. The mechanisms behind these effects were unknown and a source of fascination for early scientists.

Early experimenters noticed that gases would increase in volume upon heating, and that compressing gases would cause them to heat up. They also investigated other sources of heat, like friction (rubbing your hands together).

It was the work with gas that led to the big breakthrough. Boyle and Hooke, as well as Edme Mariotte, working in the 17th century, realised that the temperature of a gas rises consistently with pressure, and likewise falls consistently with pressure. This sounds unremarkable, until you note that you can only decrease pressure so much…

Once you have a vacuum (no pressure), you should have ‘no temperature’. Thus their observations implied that there really was a limit to how cold things could get, and predicted it was around -275 Celsius. They were of course unable to cool anything that far simply by expanding it because heat always flows into cold things, so to achieve this you need much better insulation than they had available.
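
To see how that extrapolation works, here is a minimal sketch of the same logic in Python – the pressure readings are made up for illustration; only the linear trend is the real observation:

```python
# Extrapolate where the pressure of a fixed volume of gas would
# reach zero, from two illustrative (temperature, pressure) readings.

t1, p1 = 0.0, 1.000    # relative pressure at the ice point (0 C)
t2, p2 = 100.0, 1.366  # relative pressure at the boiling point (100 C)

slope = (p2 - p1) / (t2 - t1)
t_zero_pressure = t1 - p1 / slope

print(f"Pressure extrapolates to zero at about {t_zero_pressure:.0f} C")
# -> roughly -273 C: the limit to how cold things can get
```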

So they had a big clue in the search to understand what temperature is, but still no explanation.

It took until 1738 for another great scientist to move us forward. Daniel Bernoulli realised you could use Newton’s (relatively new) laws to derive Boyle’s temperature–pressure relationship. He basically asked: what if a gas was made of a large number of very small billiard balls flying around crashing into everything? What if pressure was just the result of all these collisions? Using this theory he realised, for the first time I think, what temperature truly is.


It turns out that his model equated temperature with the speed of the billiard balls. A hot gas only differs from a cold gas in the speed of the molecules flying around. Faster molecules crash with more momentum and thus impart more pressure. Squashing the gas into a smaller volume does not give them more speed, but means more collisions each second, so higher pressure.
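
You can put numbers on this picture. The tidied-up modern form of the idea (kinetic theory) says a typical molecular speed is v = sqrt(3kT/m); here is a rough sketch for nitrogen at room temperature:

```python
import math

# Typical speed of a gas molecule from kinetic theory: v = sqrt(3kT/m).
# Nitrogen at room temperature is used as the example.

k = 1.381e-23        # Boltzmann constant, J/K
T = 293.0            # about 20 C, in kelvin
m = 28 * 1.66e-27    # mass of an N2 molecule, kg

v = math.sqrt(3 * k * T / m)
print(f"Typical N2 speed at {T:.0f} K: about {v:.0f} m/s")   # roughly 500 m/s
```

Heat the gas and that number climbs; cool it towards the limit found above and it falls towards zero.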

This is a pretty serious finding. It basically says ‘there is no such thing as temperature’ – there are only lots of little balls flying around, and their number and speed dictate the pressure they exert.

If we put a thermometer into the gas, what is it detecting then? Great question.

It turns out that solids are also made of lots of balls, except, instead of being free to fly around, they are trapped in a matrix. When a solid is exposed to a hot gas, it is bombarded by fast flying atoms. When a solid atom is hit, instead of flying off, it starts to vibrate, like a ball constrained by a network of springs.

So the ‘temperature’ of a solid is also a measure of speed of motion, but rather than linear speed it’s a measure of the speed of vibration. This makes a lot of sense – as the solid gets hotter, the balls go literally ‘ballistic’ and eventually have enough speed to break the shackles of the matrix (aka melting).


So this model of heat as ‘movement’ not only explains how gases exert pressure, but also explains how heat flows (through molecular collisions) and why things melt or vaporise.

More importantly, it shows that temperature is really just a symptom of another, more familiar, sort of energy – movement (or kinetic) energy.

Energy is a whole story of its own, but we can see now how energy and temperature relate – and how we can use energy to make things hot and cold.

Making Things Hot

There are many easy ways to make things hot. Electricity is a very convenient tool for heating – it turns out that when electric current flows, the torrent of electrons cannot help but buffet the atoms in the wire, and as they are not free to fly away, they just vibrate ever faster, ‘heating’ up.

Another way to heat things is with fire. Fire is just a chemical reaction – many types of molecules (like methane, or alcohol) contain a lot of ‘tension’; that is to say, they are like loaded springs just waiting to go off. Other molecules (often oxygen) hold the ‘key’ to unlocking the spring, and when the springs go off, it is, as you can imagine, like a room full of mousetraps and ping-pong balls – and all that motion means heat.

[youtube=http://www.youtube.com/watch?v=Pmy5fivI_4U]

Making Things Cold

Manipulating energy flows to make things cold is much trickier.

One way is to just put the thing you want to cool in a cold environment – like the North Pole. But what if you want to make something colder than its surroundings?

Well, there is a way. We learned earlier that gases get hot when compressed – it turns out they do the opposite when decompressed or ‘vented’. This is the principle that makes the spray from aerosol cans (deodorant, lighter fluid, etc.) cold. So how can we use this? First, use a compressor to compress a gas (almost any gas will do); in the process it warms up. Then let it cool back down by contacting it with ambient air (through a long thin copper tube, keeping it compressed), then decompress it again – hey presto, it comes out colder than the room! Pump this cold gas through another copper tube, inside a box, and it will cool the air in the box – and hey presto, you have a refrigerator.
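
Here is a rough sketch of the numbers behind that cycle, using the textbook ideal-gas relation for compression with no heat loss. The compression ratio is invented, and a real fridge also leans on evaporation of its refrigerant, but the principle is the same:

```python
# Compress a gas (it heats up), let it dump that heat to the room,
# then expand it again (it ends up colder than the room).
# Ideal-gas adiabatic relation: T2 = T1 * (P2/P1) ** ((g - 1) / g).

gamma = 1.4       # ratio of specific heats for air
ambient = 293.0   # room temperature in kelvin (~20 C)
ratio = 8.0       # compression ratio (illustrative)

def adiabatic(T, pressure_ratio, g=gamma):
    return T * pressure_ratio ** ((g - 1) / g)

hot = adiabatic(ambient, ratio)       # straight after compression
cold = adiabatic(ambient, 1 / ratio)  # after cooling to ambient, then expanding

print(f"after compression: {hot - 273:.0f} C")   # ~258 C
print(f"after expansion:   {cold - 273:.0f} C")  # ~-111 C
```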

Measuring Temperature

Before we had thermometers, temperature was generally estimated by touch.

However, this is where temperature gets tricky. The temperature we feel when we put our hand on the roof of a car is not really the temperature of the car; it is really a measure of energy flow (into or out of our hand), which relates to the temperature, but also to the conductivity of the car.

This is why hot metal feels hotter than hot wood, and why cold metal feels colder than cold wood – the metal, if at a different temperature from your hand, can move heat into you (or take heat away) faster than wood can. Thus our sense of temperature is easily fooled.

The ‘wind-chill factor’ is another way we are fooled – we generally walk around with clothes on, and even without clothes we have some body hair – therefore, we usually carry a thin layer of air around with us that is nearly the same temperature as we are. This helps us when it is cold and when it is hot; however, when the wind blows, it rips this layer away and supplies fresh air to our skin, making us feel the temperature more than usual. Also, because our skin can be damp, there can be evaporative effects which can actually cool us below the air temperature.

Scientists have long known that we cannot trust ourselves to measure temperature, so over the ages many tricks have been developed – can the object boil water? Can it freeze water? A long list of such milestone temperatures was developed and became essential knowledge for early scientists – until the development of the lowly thermometer.

It was noted that, like gases, solids and liquids also expand upon heating. This makes intuitive sense if you think of hot molecules as violently vibrating – they push one another away; or, at the least, as the electric charge that holds these things together is spread a little thinner, adjacent molecules end up with slightly weaker bonds.

The expansion of a liquid may be only very slight: if you have a big volume of liquid in a cup, the height in the cup will change only very slightly, but if it’s in a bottle with a narrow neck, the small extra volume makes a much bigger difference to the level. This principle is used in a thermometer – it’s just a bottle with a very narrow and long neck. The bigger the volume and the narrower the neck, the more sensitive the thermometer. Of course the glass also expands, so it is important to calibrate the thermometer – put it in ice water and mark the liquid level, then put it in boiling water and mark the new level. Then divide the distance between these marks into 100 divisions – and hey presto! You have a thermometer calibrated to the centigrade (hundred-step) scale, aka Celsius. Now you know where that came from!
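
The calibration is just a straight-line mapping between those two marks. A tiny sketch (the mark positions in millimetres are invented):

```python
# Convert a liquid level in the thermometer neck to degrees Celsius,
# given the marks made in ice water (0 C) and boiling water (100 C).

ice_mark = 12.0    # mm up the neck, measured in ice water
boil_mark = 187.0  # mm up the neck, measured in boiling water

def to_celsius(level_mm):
    return (level_mm - ice_mark) / (boil_mark - ice_mark) * 100.0

print(to_celsius(12.0))    # 0.0
print(to_celsius(187.0))   # 100.0
print(to_celsius(99.5))    # 50.0 -- halfway between the marks means 50 C
```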

=================

So that is temperature explained in a nutshell. If you enjoyed this article you may enjoy my related article on energy.

Energy Explained in One Page

Ok, so we all want to be good to the environment. The first step to doing this, as is often the case, is to understand the main characters in the story – and possibly the biggest character in the story is Energy.

However, energy is such a vague concept, so where do you go to learn more? Do you have to do a physics course?

I don’t think so, and to test my theory, I have tried to explain energy as briefly as I can in this post.

Energy 101

Energy is what makes the world go round. Literally. Every neuron that sparks in your brain, every electron that fires down a wire, every molecule burning in a fire, carries with it a sort of momentum that it passes on like a baton in a complex relay race. The batons are flooding in all directions all around us and across the universe – they are energy and we have learned how to harness them.

The actual word “Energy” is a much abused term nowadays – because energy is used to represent such a disparate range of phenomena from heat to light to speed to weight, and because it seems to be able to change forms so readily, it is cannon fodder for pseudo-scientific and spiritual interpretation. However, you will be pleased to hear that it actually has a very clear (and consistent) nature.

I like to think of energy being a bit like money – it is a sort of currency that can be traded. It takes on various forms (dollars/pounds/swiss francs) and can be eventually cashed in to achieve something. However, just like money, once spent, it does not vanish. It simply moves on to a new chapter in its life and may be reused indefinitely.

§ Energy currencies:

  [1] Matter is energy (see footnotes)
  [2] Radiation
  [3] Chemical energy
  [4] Thermal (heat) energy
  [5] Compression energy
  [6] Kinetic (movement) energy
  [7] Electrical energy

To illustrate the point, let’s follow a ‘unit of energy’ through a visit to planet Earth to see what I mean. The [number] shows every time it changes currency (see the key above; the numbers match the footnotes at the end).

The energy starts off tied up in hydrogen atoms in the sun [1]. Suddenly, due to the immense pressure and heat, the nuclei of several atoms react to form a brand new helium atom, and a burst of radiation[2] is released. The radiation smashes into other nearby atoms heating them up so hot [4] that they glow, sending light [2] off into space. Several minutes pass in silence before the light bursts through the atmosphere and plunges down to the rainforest hitting a leaf. In the leaf the burst of power smashes a molecule of carbon dioxide and helps free the carbon to make food for the plant [3]. The plant may be eaten (giving food ‘Calories’), or may fall to the ground and settle and age for millions of years turning perhaps to coal. That coal may be dug up and burned to give heat [4] in a power station, boiling water to supply compressed steam [5] that may drive a turbine [6] which may be used to generate electricity [7] which we may then use in our homes to heat/light/move/cook or perhaps to recharge our mobile phone [3]. That energy will then be used to transmit microwaves when you make a call [2] which will mostly dissipate into the environment heating it (very) slightly [4]. Eventually the warmed earth radiates [2] this excess of heat off into the void where perhaps it will have another life…

This short story is testament to an enormous quantity of learning by our species, but there are some clear exclusions to be read into the story:

  • Energy fields (auras) or the energy lines in the body that conduct the “chi” (or life force) of Asian medical tradition
  • Energy lines on the Earth (aka Ley lines)
  • Negative or positive energy (as in positive or negative “vibes”)

These supposed energy currencies relate to theories and beliefs that science has been unable to verify, and thus they have no known ‘exchange rate’. Asking how many light bulbs you can power with your Chi is thus a nonsensical question, whereas it would not be for any scientifically supported form of energy. And since energy flows account for all actions in the universe, not being exchangeable would be rather limiting.

Where exactly is Energy kept?

This may sound like a strange question – we know Energy is kept in batteries, petrol tanks and chocolate chip cookies. But the question is: where exactly is it stored in those things?

Energy is stored in several ways:

  • as movement – any mass moving has energy by virtue of the movement, which is called Kinetic Energy
  • as matter – Einstein figured out that matter is just a form of energy, and the exchange rate is amazing – 1 g = 90,000,000,000,000 joules (from E=mc^2; see the quick check just after this list)
  • as tension in force fields
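
As a quick sanity check on that exchange rate, here is a one-line calculation (using the rounded value of the speed of light):

```python
# Mass-energy "exchange rate" from E = m * c**2, for one gram of matter.

c = 3.0e8   # speed of light, m/s (rounded)
m = 0.001   # one gram, in kilograms

E = m * c ** 2
print(f"{E:.1e} joules")   # ~9e+13 J
```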

That third item, tension in force fields, sounds a bit cryptic, but actually most of the energy we use is in this form – petrol, food, batteries and even a raised hammer all store energy in what are essentially compressed (or stretched) springs.

What is a force field? Why on earth did I have to bring that up?

All of space (even the interstellar vacuum) is permeated by force fields. The one we all know best is gravity – we know that if we lift a weight, we have to exert effort and that effort is then stored in that weight and can be recovered later by dropping it on your foot.
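
For gravity, that ‘stored effort’ has the familiar schoolbook formula E = m × g × h. A quick sketch with made-up numbers:

```python
# Energy stored by lifting a weight against gravity: E = m * g * h.
# Mass and height are illustrative.

g = 9.81   # gravitational acceleration near Earth's surface, m/s^2

def stored_energy(mass_kg, height_m):
    return mass_kg * g * height_m

print(stored_energy(10, 1.5))   # ~147 J for a 10 kg weight lifted 1.5 m
```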

Gravity is only one of several force fields known to science. Magnetic fields are very similar – it takes energy to pull a magnet off the fridge, and so a magnet kept away from the fridge is actually a (small) energy store.

The next force field is that created by electric charge (the electric field). For many years this was thought to be a field all on its own, but a chap called Maxwell realised that electric fields and magnetic fields are in some senses two sides of the same coin, so physicists now talk of ‘electromagnetic’ fields. It turns out that electric energy (such as that stored in a capacitor) consists of tensions in this field, much like a raised weight is a tension in a gravity field. Perhaps surprisingly, light (as well as radio waves, microwaves and X-rays) is also energy stored in fluctuations of this field.

Much chemical energy is also stored in electric fields – for example, most atoms consist of positively charged nuclei and negatively charged electrons, and the further apart they are kept, the more energy they hold, just like raised weights. As an electron is allowed to get closer to the nucleus, energy is released (generally as radiation, such as light – thus hot things glow).

The least well known force field is the strong ‘nuclear’ force. This is the force that holds the subatomic particles (protons) together in the nucleus of atoms. Since the protons are all positively charged, they should want to repel each other, but something is keeping them at bay, and so physicists have inferred this force field must exist. It turns out their theory holds water, because if you can drag these protons a little bit apart, they will suddenly fly off with gusto. The strong nuclear force turns out to be bloody strong, but only works over a tiny distance. It rarely affects us, as we rarely store energy in this force field.

Now we understand force fields, we can look at how molecules (petrol, oxygen, chocolate) store energy. All molecules are made of atoms connected to one another via various ‘bonds’, and these bonds are like springs. Different types of molecules have different amounts of tension in these bonds – it turns out coal molecules, created millions of years ago with energy from the sun, are crammed full of tense bonds that are dying to re-arrange into more relaxed configurations, which is exactly what happens when we apply oxygen and a little heat to start the reaction.

The complexity of the tensions in molecules is perhaps the most amazing thing in nature, as it is their re-arrangements that fuel life as we know it.

What exactly is Heat then?

You may have noticed that I did not include heat as a form of energy store above. But surely hot things are an energy store?

Yes, they are, but heat is actually just a sort of illusion. We use heat as a catch-all term to describe the kinetic energy of the molecules and atoms. If you have a bottle of air, the temperature of the air is a direct consequence of the average speed of the gas molecules jetting around bashing into one another.

As you heat the air, you are actually just increasing the speed of the particles. If you compress the air, you may not increase their speed, but you will have more particles in the same volume, so more collisions each second – which also ‘feels’ hotter.

Solids are a little different – the atoms and molecules in solids do not have the freedom to fly around, so instead, they vibrate. It is like each molecule is constrained by elastic bands pulling in all directions. If the molecule is still, it is cold, but if it is bouncing around like a pinball, then it has kinetic energy, and feels hotter.

You can see from this viewpoint that to talk of the temperature of a single atom, or of a vacuum, is meaningless, because temperature is a macroscopic property of matter. On the other hand, you could technically argue that a flying bullet is red hot because it has so much kinetic energy…

Is Energy Reusable?

We as a species, have learned how to tap into flows of energy to get them to do our bidding. So big question: Will we use it all up?

Scientists have found that energy is pretty much indestructible – it is never “used up”, it merely flows from one form into another. The problem is thus not that we will run out, but that we might foolishly convert it all into some unusable form.

Electricity is an example of really useful energy – we have machines that convert electricity into almost anything, whereas heat is only useful if you are cold, and light is only useful if you are in the dark.

Engineers also talk about the quality (or grade) of energy. An engineer would always prefer 1 litre of water 70 degrees warmer than room temperature to 70 litres of water 1 degree warmer, even though these contain roughly the same embodied energy. You can use the hot water to boil an egg, or make tea, or you could mix it with 69 litres of room-temperature water to heat it all by 1 degree. It is more flexible.
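
That ‘roughly the same embodied energy’ claim is easy to check with the standard formula Q = m × c × ΔT, treating a litre of water as a kilogram:

```python
# Heat energy held above room temperature: Q = m * c * dT.

c_water = 4186.0   # specific heat of water, J per kg per degree C

option_a = 1.0 * c_water * 70.0    # 1 litre (~1 kg), 70 degrees warmer
option_b = 70.0 * c_water * 1.0    # 70 litres, 1 degree warmer

print(option_a, option_b)   # both ~293,000 J: same energy, very different usefulness
```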

Unfortunately, most of the machines we use, turn good energy (electricity, petrol, light) into bad energy (usually “low grade heat”).

Why is low-grade heat so bad? It turns out we have no decent machine to convert low-grade heat into other forms of energy. In fact we cannot technically convert any form of heat into other forms of energy unless we have something cold to hand which we are also willing to warm up; our machines can thus only extract energy by using hot and cold things together. A steam engine relies just as much on the environment that cools and condenses the water vapour as it does on the coal in its belly. Power stations rely on their cooling towers as much as their furnaces. It turns out that all our heat machines are stuck in this trap.

So, in summary, heat itself is not useful – it is temperature differences that we know how to harness, and the bigger the better.

This picture of energy lets us think differently about how we interact with energy. We have learned a few key facts:

  1. Energy is not destroyed, and cannot be totally used up – this should give us hope
  2. Energy is harnessed to do our dirty work, but tends to end up stuck in some ‘hard to use’ form

So all we need to do to save ourselves is:

  1. Re-use the same energy over and over,
  2. which means finding some way to extract energy from low-grade heat.

Alas, this is a harder nut to crack than fission power, so I am not holding my breath. It turns out that there is another annoying universal law that says that every time energy flows, it will somehow become less useful, like water running downhill. This is because heat, left to itself, can only flow one way: from something hot to something cold – thus once something hot and something cold meet and the temperature evens out, you have forever lost the useful energy you had.

It is as if we had a mountain range and were using avalanches to drive our engines. Not only will our mountains get shorter over time but our valleys will fill up too, and soon we will live on a flat plane and our engines will be silent.

The Big Picture

So the useful energy in the universe is being used up. Should we worry?

Yes and no.

Yes, you should worry because locally we are running out of easy sources of energy and will now have to start using sustainable ones. If we do not ramp up fast enough we will have catastrophic shortages.

No, you should not worry that we will run out, because there are sustainable sources – the sun pumps out so much more than we use, it is virtually limitless.

Oh, and yes again – because burning everything is messing up the chemistry of the atmosphere, which is also likely to cause catastrophe. The good news is that the solution to this is the same – most renewable energy sources do not have this unhappy side effect.

Oh, and in the really long term, yes we should worry again. All the energy in the universe will eventually convert to heat, and the heat will probably spread evenly throughout the universe, and even though all the energy will still be present and accounted for, it would be impossible to use and the universe would basically stop. Pretty dismal, but this is what many physicists believe: we all exist in the eddy currents of heat flows as the universe gradually heads for a luke-warm, and dead, equilibrium.

=============

Ok, so it was longer than a page, so sue me. If you liked this article, my first in a series on energy conservation, you might like my series on efficient motoring.

Please leave a comment; I seem to have very clued-up readers and always love to know what you think!

=============

§ Footnotes:

[1] Matter is energy, according to Einstein, and the quantity relates to mass according to E=mc^2 (c is a constant equal to the speed of light).

[2] Radiation (like sunlight) is a flow of energy, and its energy content relates to the frequency according to E=hf (h is the Planck constant).

[3] Chemical energy – the most complex energy, a mixture of different tensions in nuclear and electromagnetic force fields.

[4] Thermal (heat) energy – this is really just a sneaky form of kinetic energy [6 below]: small particles moving and vibrating fast are sensed by us as heat.

[5] Compression (or tension) energy – while compressed air is again a sneaky form of kinetic energy [6], a compressed spring is different: its energy is more like chemical energy and is stored by creating tension in the force fields present in nature (gravity, electromagnetism and nuclear forces).

[6] Kinetic (movement) energy

[7] Electrical energy – this energy, like a compressed spring, is stored as stress in force fields, in this case electromagnetic force-fields.

What exactly is ‘science’?

I used to think science was the practice of the scientific method; i.e. you propose a hypothesis, you develop a test of the hypothesis, execute it and prove the hypothesis.

That worked for me until the end of high school.

At university, I was a true nerd. I read all my textbooks cover to cover (mainly because I was too shy for girls and too poor for booze). During this time, the definition above started to fail. So much of the science was maths, statistics, observation, pattern recognition, logic and quite a bit of rote learning. Not all of it fitted into my definition of science. I became a fan of a new definition: science is the study of the nature of reality.

But then I did post-grad, and I realised that not much in science is ‘proven’ (I guess this is the point of post-grad study). Evolution, for example, is not proven. That the earth revolves around the sun is not ‘proven’. I discovered that the only things that could be proven were ‘ideas’ about ‘other ideas’. Bear with me on this one.

Let us say we define the number system – this is an ‘idea’ or conceptual construction. Within this construction we can ‘prove’ that one and one is two. Because we ‘made’ the system, with rules, we can make factual and true statements about it. We can’t do this about the real world – we cannot say anything with absolute certainty because we rely on flaky inputs like our own highly fallible perception.

It’s like that old chestnut: how can you be sure you are not living in a giant simulation? Of course you can argue that it is pretty unlikely and I would agree, and right there we have a clue to a better definition of science.

It turns out that much of modern science deals in ‘likelihood’ and ‘probability’ rather than proof and certainty. For example, we can say that the theory of evolution is very likely to be more-or-less right, as there is a lot of corroborating evidence. Science cannot be run like a law court – where the prosecution only needs to get past the threshold of ‘reasonable doubt’ to ‘prove’ someone guilty.

Aside for nerds: Science says you can use logic to prove things absolutely, but logic only works with ideas, and there is a disconnect between ideas and reality, so one can never prove things about reality. So it is thoroughly wrong for a court to say that someone has been proven guilty. The courts use this language as a convenience, to “draw a line under” a case, as they have not found a moral way to dole out punishments based on probabilities. Imagine a world in which a murder suspect gets a 5-year sentence because there was a 20% chance he was guilty! Sports referees often operate in this decisive way, perhaps because it saves a lot of arguing!

Anyway, good science cannot just give up and say that, once there is consensus, something passes from theory to fact. This is sloppy. We have to keep our options open – forever.

Think for example of Newton’s Laws of Motion. They are called ‘Laws’ because the scientific community had so much faith in them they passed from theory (or a proposed model) to accepted fact. But they were then found wrong. Strange that we persist in calling them laws!

It took Einstein’s courage (and open mindedness) to try out theories that dispensed with a key plank of the laws – that time was utterly inflexible and completely constant and reliable.

So it is that the canon of scientific knowledge has become a complex web of evidence and theories that attempt to ‘best fit’ the evidence.

Alas, there are still many propositions that many so-called scientists would claim are fact or at least ‘above reproach’. Evolution is attacked (rather pathetically), but the defenders would do well to take care before they call it ‘fact’. It is not fact, it is a superbly good explanation for the evidence, which has yet to fail a test of its predictions. So it is very very likely to be right, but it cannot be said to be fact.

This is not just a point of pedantry (though I am a bit of a pedant) – it is critical to keep this in mind as it is the key to improving our model.

Two great examples of models people forget are still in flux…

1) The big bang theory

2) Quantum theory

I will not go into global warming here though it is tempting. That is one where it doesn’t even matter if it is fact, because game theory tells you that either way, we better stop making CO2 urgently.

Back to the big bang.

I heard on the Skeptic’s Guide podcast today about an NSF questionnaire that quizzed people about whether they believed the universe started with a massive explosion, and they tried to paint the picture that if you didn’t believe that, then you were ignorant of science. This annoyed me, because the big bang theory is now too often spoken of as if it were fact. Yes, the theory contributes viable explanations for red-shifted galaxies, the background radiation, etc., etc., but people are quick to forget that it is an extrapolation relying on a fairly tall pile of suppositions.

I am not saying it is wrong, all I am saying is that it would be crazy to stop exploring other possibilities at this point.

You get a feeling for the sort of doubts you should have from the following thought experiment:

Imagine you are a photon born in the big bang. You have no mass, so you cannot help but travel at ‘light speed’. But being an obedient photon, you obey the contractions in the Lorentz equations to the letter, and time thus cannot pass for you. However, you are minding your own business one day when suddenly you zoom down toward planet earth and head straight into a big radiotelescope. Scientists analyse you and declare that you are background radiation dating from the big bang and that you have been travelling for over 13 billion years (they know this because they can backtrack the expansion of the universe). Only trouble is, that for you, no time has passed, so for you, the universe is still new. Who is right? What about a particle that was travelling at 0.999 x the speed of light since the big bang? For it, the universe is some other intermediate age. So how old is the universe, really?
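
To put a number on that ‘intermediate age’, here is a minimal sketch using the standard Lorentz factor; 13.8 billion years is the age as measured from here, and the traveller speeds are arbitrary examples:

```python
import math

# Time experienced by a traveller moving at a constant fraction of
# light speed while ~13.8 billion years pass as measured from Earth.
# Lorentz factor: gamma = 1 / sqrt(1 - (v/c)**2).

age_here_gyr = 13.8   # billions of years, Earth's reckoning

def experienced_age(v_over_c):
    gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
    return age_here_gyr / gamma

print(f"{experienced_age(0.999):.2f} billion years")     # ~0.62
print(f"{experienced_age(0.999999):.4f} billion years")  # ~0.0195
# ...and in the limit v -> c (the photon), the experienced age -> 0
```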

This reminds us of the fundamental proposition of relativity – time is like a gooey, compressible, stretchable mess, and so is space, so the distance across the universe may be 13.5 billion light years, or it might be a micron (how it felt to the photon). It all depends on your perspective. It is much like the statement that the sun does not revolve around the earth and that it is the other way around. No! The sun does revolve around the earth. You can see it clearly does. From our perspective at least.

Now, quantum theory.

Where do I start? String theory? Entanglement? Please.

The study of forces, particles, EM radiation and the like is the most exciting part of science. But being so complex, so mysterious, so weird and counter intuitive, it is super vulnerable to abuse.

Most people have no idea how to judge the merits of quantum theories. Physicists are so deep in there, they have little time (or desire or capability) to explain themselves. They also love the mystique.

I do not want to antagonise physicists, so I will add that the vast majority have complete integrity. They do want to understand and then share. However, I have been working in the field for long enough to know that there are weaknesses, holes and downright contradictions in the modern theory that are often underplayed. In fact these weaknesses are what make the field so attractive to people like me, but they are also a dirty little secret.

The fact is that the three other forces (weak nuclear, strong nuclear and electromagnetic) have not been explained anything like as well as gravity has (by relativity). And don’t get me started on quantum gravity.

———————-

Anyway, thinking about all these issues, I concluded that science was (definition #3) the grand (platonic) model we are building of reality, ever evolving to best fit our observations.

My man, Plato

That works well for me. However, I recently came across a totally different definition for science:

#4: “Science is a tool to help make the subjective objective.”

OK I paraphrased it to make it more snappy. It was really a discussion about how science was developed to overcome the fallibility of the human mind. Examples of weaknesses it needs to overcome are:

  1. The way our perception is filtered by preconceptions
  2. How we see pattern where there is none
  3. How we select evidence to match our opinion (confirmation bias)
  4. How we read too much into anecdotal evidence
  5. etc etc.

I could go on. So ‘science’ is the collection of tricks we use to overcome our weaknesses.

I like this definition. We are all going about building our own model of the world in our heads… and it’s time for an audit!

Gravity explained in 761 words

People seem to be harbouring the impression that there is no good theory of Gravity yet. I asked a few friends – most thought Newton had explained it, but couldn’t explain it themselves. This is rather sad, 80-odd years after a darn good theory was proposed.

Of course there is still some controversy and the odd contradiction with other beloved theories, but the heart of the General Theory of Relativity really does a great job of explaining gravity, it is wonderfully beautiful, and it can be roughly explained without recourse to jargon and equations.

This is a theory that’s just so darn elegant, it looks, smells and tastes right – once you get it. Of course, the ‘taste’ of a theory doesn’t hold much water; for a theory to survive it needs to make testable predictions (this one does) and needs to survive all manner of logical challenges (so-far-so-good for this one too).

This is not a theory that needs to remain the exclusive domain of physicists, so for my own personal development as a scientist and writer, I thought I might try an exercise in explaining what gravity is – according to the general theory of relativity.

For some reason, my wife thinks this is strange behaviour!

============

The story really got started when Einstein realised that someone in an accelerating spaceship would experience forces indistinguishable from the gravity felt back on Earth.

He or she could drop things and they would fall to the floor (assuming the spaceship is accelerating ‘upwards’) just as they would fall on Earth.

So perhaps that’s all gravity is… some sort of acceleration? Let’s see.

In the spaceship, it’s clear to us that the objects would appear to fall to the floor, but in reality, it is the floor of the spaceship that is rushing up towards the objects – this explains why things fall at the same speed whether heavy or light, matching Galileo’s own test results when he dropped various things, supposedly from the leaning tower of Pisa. It further implies that things will ‘fall’ even if they have no mass at all… such as light beams.

The thought experiment goes thus: consider a laser beam shining across the spaceship control room; it would curve slightly downwards, because the light hitting the opposite wall would have been emitted a little time ago, when the spaceship was a little way back, and going a bit slower (remember, it’s accelerating).

We know the light is not bending; it is just that the source is accelerating, resulting in a curved beam. Imagine a machine gun spraying bullets across a field – as you swing the gun back and forth, the bullets may form curved streams, but each individual bullet still goes straight.
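
Just to show how subtle the effect is, here is a minimal sketch of the apparent ‘sag’ of a beam crossing a cabin accelerating at the same rate as Earth gravity (the cabin width is invented):

```python
# In the time t = w / c that light takes to cross a cabin of width w,
# the accelerating floor rises by 0.5 * a * t**2 - the apparent "sag".

c = 3.0e8   # speed of light, m/s
a = 9.8     # acceleration matching Earth gravity, m/s^2
w = 10.0    # cabin width, m

t = w / c
sag = 0.5 * a * t ** 2
print(f"apparent sag across the cabin: {sag:.1e} m")   # ~5e-15 m
```

A few femtometres of sag is why nobody notices this in everyday life, and why it took starlight grazing the sun to make the bending measurable.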

So Einstein suggested that perhaps light beams will bend in this same way here on Earth, under a gravitational field. Now Newton’s theory of gravity says light beams may also bend if they have ‘mass’, but the mass of light is a dodgy concept at best (it has inertia but no rest mass – that’s a whole different blog posting). Anyway, even if it does have mass, it would bend differently from what Einstein predicted. So the race was on to see how much gravity could bend light…

This bending-of-light prediction was confirmed by a fellow called Eddington, who showed that during a solar eclipse, light from distant stars was indeed bent as it passed near the sun, and by exactly the predicted angle.

Einstein went further though, suggesting that light beams on Earth are, just like on the spaceship, really travelling straight and only appear to bend – and that this can be so if space-time itself is curved. They are going straight, but in curved space.

We know that the shortest distance between two points is a straight line, but if that line is on a curved surface, supposedly straight lines can do strange things – like looping back on themselves. Think of the equator. This model therefore allows things like planets to travel in straight lines around the sun (yes, you read right).

The model has been tested and shown to work, and gives good predictions for planetary motion.

So what can we take home from all this?

Well mainly, if this model is right, we need to let it sink in that gravity may not be a force at all, but an illusion, like the centrifugal ‘force’ you experience when you drive around a corner.

Secondly, it is an open invitation to think about curved space and its marvellous implications!