Monthly Archives: October 2008

Imaginary numbers challenge

I have a challenge for people who understand imaginary numbers (if that is indeed possible).

Now, I have seen how imaginary numbers can be useful. Just as negative numbers can.

For example, what is 4-6+9? 7. Easy. But your working memory may well have stored ‘-2’ in its mind’s eye during that calculation. But we cannot have -2 oranges. Or travel -2 metres. Oh sure, you can claim 2 metres backwards is -2 metres. I say it’s +2 metres, the other way (the norm of the vector).

What about a negative bank balance? I say that’s still platonic, a concept. In the real world it means I should hand you some (positive) bank notes.

We use negative numbers as the “left” to the positive’s “right”. Really they are both positive, just in different directions.

Now for imaginary numbers. I have seen how they allow us to solve engineering problems, how the equations for waves seem to rely on them, how the solution of the differential equations in feedback control loops seems to require them.

But I argue that they are just glorified negative numbers. The logarithmic version of the negative number.

So what is my challenge?

Well, the history of mathematics is intertwined with the history of physics. Maths has made predictions that have subsequently helped us to understand things in the real world. Maths models the world well, such as the motion of the planets, or the forces suffered by current-carrying wires in magnetic fields.

But the question is: is there any basis in reality for imaginary numbers? Or the lesser challenge, negative numbers? 

Is there a real-world correlate of “i”? Or is it a mere placeholding convenience?

Or perhaps positive numbers also lack this correlation?

The speed of time

I want to talk about something very close to my heart.

It has been an obsession for some time now, and I have probably thought about it a little too much, and gone a little too far without checking with some peers. Alas, I don’t know too many physicists down here in Cornwall, and if I wrote papers, they would probably be too disconnected, and not do me any favours. Besides, I suspect the academic world would not really take a shine to someone like me sending in papers without affiliation to any university or research group.

Anyway, my present subject of study (call it a do-it-yourself dissertation) is “the speed of time”. What controls it? How do we measure and sense it? Is there an absolute? That sort of thing.

My thoughts have gone to some interesting places, and some of the propositions I would like to test carry interesting implications.

But let me start with my first problem. It relates to how people seem to constantly ignore the implications of special relativity. Take for example, the age of the universe…

Have you ever noticed how people will, one moment, make declarations about the age of the universe, and then in the next agree that time is relative? Isn’t this a contradiction?

I mean, on the one hand, Katie Melua was informed that her estimate was too low (12 billion years). She actually recorded a gag version of her song after a respected academic (Simon Singh) chided her for getting it ‘wrong’, and also for calling it a guess, which, he said, was an insult to a century of astronomical progress.

Then, if you read a bit about special relativity, it explains that time is relative and can ‘dilate’. For my readers who don’t know what that means, it means that how much time passes depends on how fast you are moving. This theory has some well known implications, such as the “twin paradox” in which a space travelling twin returns from his travels younger than his brother.

Now how are we supposed to square these two well-accepted bricks in the foundations of modern physics? The universe is ‘strictly 13.7 billion years old by current estimates’, but never mind, because time is relative, so if you happened to be travelling at 99% of the speed of light during that time, your clock will only have ticked away ~1.9 billion years (according to the Lorentz transformation). To make matters worse, light waves (/particles) that set off at the start, travelling at the speed of light of course, would have yet to see their watch tick at all, making the universe brand-new as far as they are concerned.
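For the record, here is the textbook time-dilation arithmetic behind that figure – nothing beyond the standard Lorentz factor, with a coordinate age of 13.7 billion years and a speed of 0.99c assumed:

```latex
% Proper time on a clock moving at v = 0.99c while 13.7 Gyr of coordinate time passes
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} = \frac{1}{\sqrt{1 - 0.99^2}} \approx 7.1
\qquad\Rightarrow\qquad
\Delta\tau = \frac{\Delta t}{\gamma} \approx \frac{13.7\ \text{Gyr}}{7.1} \approx 1.9\ \text{Gyr}
```

Push v closer to c and γ grows without bound, which is why the photon’s ‘watch’ never ticks at all.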

Doesn’t this make a nonsense of the whole concept of age? Or should we say: “for objects in our inertial frame, the universe appears to be 13.7 billion years old”?

That’s pretty wishy-washy – and besides, who is to say that our inertial frame is superior to any other? 

Please someone help me sort this out, as I can think of some pretty serious implications if we can’t.

If you would also do me a favour, pass on this challenge to your nerdiest friends.

 

PS. This one is just the start. I have others, and perhaps like this one, all they need is a reality check!

In Praise of Logarithms

It occurs to me at this time that powers (or logarithms) are an equally justifiable numbering system of their own; indeed they may in fact be more meaningful and representative of ‘reality’ than the linear numbering systems we use so often. What I am referring to is a numbering system where consecutive ‘numbers’ are not simply the last number +1, but the last number multiplied by some factor. So: 0 1 2 3 may be used to represent 1 10 100 1000 in the case of a base 10 log (or exp), or 1 2 4 8 in the case of a base 2 system. (You can see that the numbers are simply obtained by raising the base (2 or 10) to the power of the number in question – so these really are just the logarithmic version of normal numbering.)

But wait! The notable thing here is that this system has no apparent negative numbers: -2 -1 0 1 2 3 becomes 0.01 0.1 1 10 100 1000 for base 10.

Aside 1: you will note that arithmetic in this system is ‘altered’ – addition and multiplication get mixed up! Multiplying the underlying quantities means adding their log-numbers, so 2*3=5 and 6*9=15 (true for all bases!), while adding the underlying quantities gives odd results: 2+2=3 (in base 2) but 2+2=2.3010… (in base 10).
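To make that concrete, here is a minimal sketch in Python (my own illustration – the function names and the choice of base are arbitrary):

```python
import math

BASE = 10  # the base of the log-number system; try 2 as well

def to_log(x):
    """Convert an ordinary (positive) quantity into its log-number."""
    return math.log(x, BASE)

def to_linear(n):
    """Convert a log-number back into the ordinary quantity it represents."""
    return BASE ** n

# Multiplying the underlying quantities = adding their log-numbers:
print(to_log(to_linear(2) * to_linear(3)))  # -> 5.0 (the '2*3=5' of the aside)
print(to_log(to_linear(6) * to_linear(9)))  # -> 15.0, whatever BASE is (up to rounding)

# Adding the underlying quantities gives the 'strange' sums:
print(to_log(to_linear(2) + to_linear(2)))  # -> ~2.3010 in base 10 (3.0 in base 2)
```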

Aside 2: Negative numbers? The concept of negative numbers in this context has a strong (and genuine) relationship with imaginary numbers in conventional numbering systems. The only way to obtain negative numbers is to raise your base to the power of an imaginary number: e^(i*pi) = -1 being the famous example of this.
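A quick numerical check of that aside, using Python’s built-in complex maths (again, purely my illustration):

```python
import cmath

# e^(i*pi) lands on -1 (up to floating-point error)...
print(cmath.exp(1j * cmath.pi))  # (-1+1.2246e-16j), i.e. essentially -1

# ...and, going the other way, the 'log-number' of -1 is purely imaginary:
print(cmath.log(-1))             # 3.141592653589793j, i.e. i*pi
```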

There is, however, much benefit to be had in ceasing to think of these numbers as powers and instead thinking of them as numbers in their own right – a numbering system extending from -infinity (representing the infinitely small) to +infinity (representing the infinitely large). This is much more in keeping with reality – in which negative numbers don’t really exist! In fact we are so used to them that we have forgotten that they are just as weird as imaginary numbers (they were the imaginary numbers of their time). They, just like imaginary numbers, are so darn useful and sensible that we forget that they really don’t have any basis in reality. They are firmly stuck in the platonic world.

So what of reality then? Imagine two points in open space. How far apart are they? A yard? A mile? One cannot say, as the space has no reference measure besides the two points. The only definition we might attach is to say the distance is “1”. I.e. we define all distance in that world to be the distance between the two points. If you added more points, the distances between them could then be expressed as multiples of the length AB. Let’s use (for example) a base 2 system, because a base two system gives us the case of doubling (or splitting in half) with each increment. So if CD were AB doubled 10 times, then its length would be 10. If EF were AB halved ten times, then its length would be -10.

So what’s the point? This ‘numbering’ system allows a better basis for attacking the big cosmological question: What is the nature of space?

Aside 3: “Information” has been shown to be binary (each bit of info halves the unknowns). If you have two boxes, 1 bit will tell you which one has the prize; 4 boxes will need 2 bits; 32 boxes, 5 bits. There is no such concept as negative bits. This numbering system linearises information content.
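In other words, the number of bits is just the base-2 log-number of the box count. A one-line check (illustrative code, nothing more):

```python
import math

# Bits needed to locate the prize among n equally likely boxes: log2(n)
for boxes in (2, 4, 32, 1024):
    print(boxes, "boxes ->", math.log2(boxes), "bits")
# 2 -> 1.0, 4 -> 2.0, 32 -> 5.0, 1024 -> 10.0; never negative for one box or more
```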

Aside 4: Which base? Well, I am not tied down on this yet. 2 is good and e has a strong case. 10 is probably not as useful as we think (Q: why is 10 ‘special’? A: It’s not.)

Please, o blogosphere, dispense thine thoughts!

Marvellous… homeopathy

Homeopaths are definitely on to something.

If you visit one, you will find a caring & honest person. They will look at your problem holistically, and explain how western medicine has been corrupted by money & big pharma, and has been blinkered so successfully it cannot see the big picture. They may perhaps explain that “just like the universe, the body acts as a living open self-organizing system susceptible to entropy yes, but also chaos and new order” (I quote the tenacious Marty from a homeopathy blog). Hence modern science, which is really about pigeon-holing everything, is not really up to the job of working with the real system.

Now, there are many critics. What exactly is their issue? What on earth do they have against this clearly beneficent endeavour?

Anti-homeopathy rants are two-a-penny on the blogs, and they are very interesting to analyse. They argue that science cannot quantify/comprehend/explain the effect of homeopathy, and therefore, clearly, homeopathy is all poppycock. Fine, no point in engaging with them, they are ‘stuck in their own paradigm’.

But much more interesting is the question: what motivates these nay-sayers?

Well, the blogosphere has its theories. One that crops up often is the suggestion that significant opponents must be aiming to hold back the good news from the public so they stay trapped in the western way, taking expensive drugs that never clear up their problems completely and therefore leave them financially trapped, but ignorantly grateful. Good for the capitalist systems that run our world, no?

Does this add up? Could all those who denigrate homeopathy really have something to gain from its demise? Another suggestion is that these folks have invested too much in the western system of understanding the universe, which has gone down a blind alley, and they are desperately holding on…

What a lot of bollocks.

The reason people keep popping up who despair at homeopathy is because a certain fraction of the population just happen to grow up with the ability (and desire) to only ever believe things that they fully understand.

Some of those people go on to study science, and they go on to see the marvellous wonder of nature, all the more wondrous because it makes sense. It adds up. It is logical.

Science can predict solar eclipses, it can make your satnav work, it can even allow you to talk to someone in New Zealand (they are nice folks, after all).

Even the stuff that sounds like hocus pocus – such as Quantum tunnelling, Heisenberg’s uncertainty principle, quarks, photonic crystals or wave/particle duality – is quite understandable. Yes it may take years of nerdy concentration, but these theories, while complex, are consummately understandable.

Energy is an interesting example. It’s hard to pin down; even scientists can’t give a good account of what it is. A few hundred years back there were plenty of theories, but the application of logic has sorted the wheat from the chaff, and now, although science struggles to define it, we know where it is, how to measure it, how it flows, and even how to use it. But people confuse these proper energy flows (electricity, nerves) with things like acupuncture meridians and leylines and the like. People who think for themselves can quickly spot when concepts like energy are being used logically, and when they are being used nonsensically.

Now these people, these thinkers, will, if unlucky enough, come across homeopathy. Attractive at first: lots of proponents, lots of jargon, and above all hugely promising. At first things go well. Any really smart student of a new subject will experience the frisson of the unknown, the new, and with good intention will go with the flow, like a foreigner trying to pick up a new language.

However, as time passes, while usually, with other subjects like language, or quantum physics, it all slowly starts moving into place, homeopathy simply stays at arm’s length. It still ‘sounds’ good, as do other ‘sensible’ subjects, but it never reduces to complete sense – the complete sense where every cause is linked to every effect via an unbroken chain of explainable steps.

So these people, these thinkers-for-themselves, these take-nobody’s-word-for-nothing types, eventually realise it is all a sham.

But sadly, they can also see exactly how others, more trusting, may take it in, hook, line and sinker, so much so they really believe it, feel it & trust it.

They do so because it works.

Yes, that’s what I said. The proof is in the pudding. Those who try it, report that it works.

So what does that smug group of logical smarty-pants have to say about that? Well, they will happily explain that the benefits are real, but are rather due to:

  1. The placebo effect
  2. Regression to the mean
  3. The simple attention of another person
  4. And others you can read about if you are one of those nerds who wants to be convinced, like me.

The speed of evolution

The theory of evolution is greater than it looks. It is not just clever. It is not just useful. Its biggest value is as a nail in the coffin of some very destructive ideas. Not just the idea that Europeans are superior to Africans, or the idea that humans are superior to animals, but the idea that we all have some divine purpose – and therewith, the whole idea of good and evil.

The fascinating story of how the tide of evidence has led to the unravelling of religious explanations for the world is, however, not what I wish to ponder here. No, I would like to ponder an area of evolutionary theory that still holds some uncertainty, some mystery.

Relax, I am not trying to ‘break’ or disprove evolution. I am fairly confident it is largely right, but I still think there are questions about its speed.

The problem…

Anyone who has read on the subject understands the pure cunning of natural selection. Basically put, any replicating ‘creature’ that produces slight mutations in its offspring will produce some offspring that are better than itself – better at competing for resources, better at surviving. Of course many mutations (indeed perhaps most mutations) may produce ‘worse’ offspring, but if the better offspring survive proportionally more, there will be a generational improvement.

This is the same phenomenon that allows us to breed better race-horses, beef-cattle or strawberries.

Now, we can see the effects of selection very quickly in a petri dish of bugs, or perhaps in viruses in the human population, but the evolution of large mammals is a slow affair, not easily observed, and it took the discovery of ‘missing links’ to confirm the theory that we had indeed evolved from primate stock.

I personally have not read widely on evolution; I have simply spent lots of time thinking about it, and also spent some brain cells on pedantic calculations and computer simulations.

What comes up, again and again in the simulations is the question of speed.

Speed?

Yes, speed. How fast do we evolve, and have we had enough time to do it?

Aside…

There are two ways to tackle the question of evolutionary speed. On the one hand, you could say: we have only had, say, 5 billion years to evolve from the basic elements, so we must have evolved fast enough. The calculations must simply be made to fit the data.

Some (not me) have however said, hang on, calculations show that we haven’t had the time to evolve, so the theory must have some massive fault.

The latter argument betrays a misunderstanding of evolution. It assumes that, because evolutionists claim evolution is ‘true’ and ‘right’, their models must also be right – so if those models suggest we needed 100 billion years to evolve, that would prove evolution is too slow and that some other agency is required to square the circle.

However, just because evolution is fairly certain to be right, that doesn’t mean the models are simple, and I hope to give some insight into the difficulties I came across in my own amateur attempts at modelling it.

Factors that throttle evolutionary change… 

Let’s look at the things that affect the speed of evolution.

  1. the generation gap (time between generations)
  2. the strength of the mutation.
  3. the selection pressures (multiple)
  4. the male/female requirement (and its surprising turbo function)

The first one is obvious – the more generations you get through each year/millennium, the greater potential for evolutionary change.

Mutation strength is more interesting. You could have multiple errors in a DNA sequence; the more errors, the stronger the mutation. However, some errors in the DNA may have no particular effect, while other errors could be catastrophic, so that matters too. To keep things simple, let’s just focus on the ‘strength’ of the inter-generational change.

If the mutations were very small and subtle, this, I would predict, would slow evolution down. If the mutations are too large (remember they are random), they are less likely ‘to be compatible with life’. I suspect that because they are random, they will come in all shapes and sizes, ranging from untraceably small to outright fatal (resulting in early miscarriage).

However, we have seen in the fossil record evidence that evolution speeds up and slows down. The statistics of mutation ought not to change like that, so there must be more to it.

Selection pressure is the next, and even more interesting, factor.

People have questioned why the world doesn’t have living examples of ‘the missing link’. They must have been ‘viable’ in their own right, so why didn’t they survive?

Some thought on the subject (as well as my own simulations) shows that this is not surprising. The speciation event (when one species splits into two) is usually the result of a population becoming subjected to a varying selection pressure (usually geographical). If the population is well mixed, it keeps together, but a mountain range or body of water can reduce the interaction enough to allow the different ‘random walks’ to optimise the two populations for their environments. Allowed to proceed for any length of time, you land up with two separate species, with nothing in between. Once separated, they cannot ‘rejoin’ even if they mix once more, so the divergence will continue. Some have argued that ‘spurts’ in evolution augment this speciation process.

So why do these spurts happen? This is most likely selection pressure, but how does it work? Well, putting it simply, the more the environment changes, the more the creatures will need to change to survive. In a static environment, creatures will evolve to suit, but following the law of diminishing returns, once ‘happy’, they would have no driving force to change any more.

However, you could well argue that while a quickly changing environment allows faster evolution, it is simply taking its foot off the brake; something else is really controlling the maximum speed of evolution.

What do I mean by maximum speed? Well, ask how fast an environment would need to change such that evolution could not keep up – then you have found its maximum speed.

Some folks point to an example of an environment that promoted fast evolution. Theorists have suggested that early man was often the victim of famine, and often needed to move around, which resulted in accelerated evolution. The stronger penalty on the weak, and the higher reward for strength and wisdom, meant that potentially positive mutations, which might be lost in stable ‘easy’ environments, were more effective in this environment, thus accelerating change.

There is much written about selection, by people much smarter than myself, so I will not elaborate any more on it. I would simply summarise that, in the course of billions of years, some environments would allow fast evolution, while others would stagnate, and the average speed, while hard to quantify, would not be consistently high enough to be the key to the sort of evolutionary speed we need to gain so many evolved features in so few generations.

So at this stage, I would like to point out the most marvellous thing about evolution, which I think can multiply the effects of natural selection.

The two-sex system and its accelerating effect…

As I grew up, I sometimes wondered why we needed two sexes. Why not just allow all creatures to have offspring, add in a little mutation, and hey presto, it should work. 

This was the form of my very first simulations (some 17 years ago now!). It showed me fairly rapid evolution and increasing fitness, so all looked good. If you set the bifurcation ratio to 2, give each child a random ‘fitness’, and then set the chance of reproduction to be proportional to fitness (a very simple algorithm), the population does get fitter. However, when I compared it with the standard model, where you have two sexes, two differences in macro behaviour showed up.
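Before getting to those differences: for anyone curious, that one-sex algorithm boils down to something like the following sketch (a reconstruction in Python, not my original code – the population size, mutation size and resource cap are made-up parameters):

```python
import random

def one_sex_generation(population, capacity=200):
    """One generation of the one-sex model: the chance of reproducing is
    proportional to fitness, each reproducer has two offspring (bifurcation
    ratio 2), and each child inherits its parent's fitness plus a small
    random mutation."""
    best = max(population)
    children = []
    for fitness in population:
        if random.random() < fitness / best:        # fitter -> more likely to breed
            for _ in range(2):                      # bifurcation ratio of 2
                children.append(max(0.01, fitness + random.gauss(0, 0.05)))
    random.shuffle(children)
    return children[:capacity] or population        # crude resource limit

population = [random.uniform(0.1, 1.0) for _ in range(200)]
for generation in range(100):
    population = one_sex_generation(population)
print("mean fitness after 100 generations:", sum(population) / len(population))
```

Run it and the mean fitness creeps steadily upward, which is the ‘fairly rapid evolution’ described above.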

Firstly, you have to add an extra ‘selection’ criterion. It is not just about surviving to reproductive age; it is now about surviving AND finding a mate. And you can forget about monogamy, so a very fit (for sexual selection) male could fertilise several females, at the expense of less fit males. This effect can (in my model) greatly accelerate change. All you need to do to get fast change is ensure that partners can identify fitness accurately.

The second interesting effect from having two sexes (and what I found out from my model) is that good genes can spread through the population, which does not happen in the one-sex model. This spread of good genes means that one part of a population could be learning how to deal with sunburn, while another is learning to deal with sickle-cell anemia, and their solutions can be shared by all.
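Here is a minimal sketch of that second effect (again purely illustrative – genes as bits, uniform crossover, fitness-weighted mate choice; this is not the actual model I used):

```python
import random

GENES = 20  # each gene is 0 (bad) or 1 (good); fitness = number of good genes

def fitness(genome):
    return sum(genome)

def child(mum, dad, mutation_rate=0.01):
    """Uniform crossover: each gene comes from either parent, with rare mutation."""
    genome = [random.choice(pair) for pair in zip(mum, dad)]
    return [gene ^ 1 if random.random() < mutation_rate else gene for gene in genome]

# Two sub-populations that have each 'solved' a different problem:
sunburn_proof = [1] * (GENES // 2) + [0] * (GENES // 2)  # good genes in the first half
sickle_proof  = [0] * (GENES // 2) + [1] * (GENES // 2)  # good genes in the second half
population = [sunburn_proof[:] for _ in range(50)] + [sickle_proof[:] for _ in range(50)]

for generation in range(40):
    # choose both parents with probability weighted by fitness (crude sexual selection)
    weights = [fitness(genome) for genome in population]
    population = [child(*random.choices(population, weights=weights, k=2))
                  for _ in range(len(population))]

print("best fitness now:", max(fitness(genome) for genome in population))  # heads towards 20
```

Start the run with two sub-populations that have each solved a different problem, and recombination quickly produces individuals carrying both solutions – something the one-sex version can only manage by re-evolving each trait from scratch.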

Now we are talking. This, if I have my thinking right, would be a serious turbo-boost for evolution, allowing it to evolve lots of traits in parallel, whereas the single-sex model would have to work on one feature at a time – first build an eye, then build a digestive tract, then build a good sense of humour… This buys evolution a lot of time. It makes it far more likely that 5 billion years is enough time to make all the amazing variety we see today.

I would therefore argue that this makes it more likely we had time to go from the first two-sex replicator to the world of wasps, earwigs and herpes.

That raises my next question: how did the first two-sex replicator come about? I think the computer modelling may be beyond me, but I hold out hope for my children!

====================================

UPDATE: I have added a follow-up post which addresses the question of ‘epigenetics’, the possibility that the DNA sequence is not the sole databank in our genes.
 

Skeptical society, the logical next step from secular society

Yes, it’s true, I’m one of those science nerds who thinks that a good scientific understanding of the world should underpin government. And education, healthcare, the law, etc.

However, I have set myself up for disappointment, because our society doesn’t work like that. It is simply much easier to use anecdotes to sway opinion, to spin data, and to manipulate with a vast arsenal of marketing tricks.

Politicians, salesmen and journalists all know that the full details (of anything), with all ifs and buts included, will not make a catchy headline or slogan, will not catch the eye or tug the heartstring.

Emotions matter more than facts. People vote with their hearts not their heads.

No amount of simple campaigning for ‘better conduct’ will ever make a damn bit of difference, as the causes and incentives remain. To move on, what we need is a society that thinks for itself. A skeptical society.

Science’s image problem; an essay

This was originally posted by me on the Skeptic Forum in February 2006. I wanted to keep a copy, so I have popped it up here with some edits following the advice of the forum readers.

Science’s Image Problem 
Jarrod R. Hart 
January 2006    

Science, technology and the whole idea of modernity have developed an image problem.

To illustrate this trend let’s pick a year some will remember well: 1969.

Neil Armstrong and Edwin “Buzz” Aldrin have just walked on the moon, microwave ovens have started to appear in kitchens and nuclear power seems to hold the key to unlimited energy. 

Communication has been revolutionised by the satellite, women’s lives have been revolutionised by the contraceptive pill and the quality of life is skyrocketing: labour-saving devices such as automatic washing machines, food processors and lawn mowers are finding their way into the homes of the masses. Confidence in science is at an all-time high.

Now it is 2006. In the minds of many, the term ‘science’ is associated with things like animal testing, genetic engineering, global warming and nuclear war. People are even starting to pay a premium for food made in ways that avoid modern technology (so-called ‘organic’ foods). So what changed?

There are many answers to this question and I am sure many readers will have powerful examples from their own fields of experience; I will however put forward a theory that I feel holds water.

Events 

When Harold Macmillan, the then Prime Minister of the UK, was asked what could steer a government off course, he answered “Events, dear boy, events!” And, as I now suggest, a handful of key events has been largely responsible for starting the rot.

Public opinion is a strange beast. It is wildly reactionary and often auto-catalyzes in a frenzy of irrationality. Although it’s true that amazing faith can develop with little or no evidence (the latest wonder-diet for example!), this is usually born from a strong desire to believe. In general, though, it is much easier to destroy public confidence than to build it.

Three Mile Island (1979), Bhopal (1984), Chernobyl (1986), and the Exxon Valdez oil spill (1989) all had profound effects on the public psyche. Not only did the dark side of industry rear its ugly head, but also, for the first time, the man on the street began to realise “hey, I have an opinion on this!” The general public did not immediately turn against technology, but rather, they started to ask questions. 

The Media Machine 

I would like to suggest that the rot only took hold when the media sensed this insecurity. In a fair world, an honest, open, questioning attitude is a good thing. But this world is not fair. 

Technology had, until the early 80’s, been presented in a very positive light in the media. Big business had for a long time used the public’s confidence in technology to ease in new products and services. All a marketing team needed to do was describe their product as “modern” – and this immediately implied an innate superiority. For some reason, old was bad and new was good. 

In the 1980’s something changed. People’s level of exposure to the media hit a critical level – just enough to make people think they were ‘well informed’. This new level of exposure meant, for the first time, that people were having news of industrial disasters piped into their sitting rooms. And since the public knew about it, the public would have an opinion about it. But who would decide what that opinion would be? This leads us to the ultimate downfall in the public image of technology, for too often, it would be the media that would decide for us. 

To illustrate, simply ask yourself what makes better reading – “Scientists develop drought resistant crops” or “FRANKENFOOD!” 

In the simple battle for the public’s attention, scaremongering has prevailed and it’s not surprising at all – it’s so easy! Science has this nasty habit of dealing with unknowns: questions, hypotheses and statistics. It rarely (if ever) deals in cold hard facts. This makes science a sitting duck.

The nineties bear this trend out, and issues like the vanishing rain forests, global warming, cancer from cellular phones and genetic engineering all took their toll. 

To most people, something is either good for you or bad for you. Radiation is bad, vitamins are good; bacteria are bad and exercise is good. The media like this simple worldview – it makes for good sensational headlines and ensures that articles aren’t too full of ‘complicated science’. 

The need for shock value naturally leads to half-truths. While any chemist knows “the poison is in the dose”, most people don’t, and the media takes full advantage of this. 

Radiation (sunshine!), just like vitamins, can be good (in moderate doses) and bad (in excessive amounts). Bacteria, exercise, alcohol and almost anything for that matter is usually good and bad depending on how much, when and for whom. As Oscar Wilde said, “The truth is rarely pure and never simple”. 

To make matters worse, once a piece of misinformation is out there, it is hard to stop and even harder to bring anyone to account. 

A good example was the hullabaloo surrounding research by Dr Andrew Wakefield of the Royal Free Hospital in London. In his 1998 paper Dr Wakefield highlighted a “possible” link between the MMR jab (the combined Mumps, Measles and Rubella vaccine given to many children routinely) and autism. Although it was only suggested as a possibility, needless to say the media had a field day, cleverly leaving out the ifs and buts – for example: “Child Vaccine Linked to Autism” (BBC News, 27 Feb ’98). This simple irresponsible action led to several years of reduced vaccine take-up, with possibly fatal consequences.

This type of misinformation is particularly dangerous because it parades as ‘proper science’. The media, by referencing a scientific paper in a reputable journal (The Lancet), lend themselves credibility, but then, by the simple act of removing a single word (“possible” in the above case), they degrade the science and greatly harm its reputation.

Statistics: The Media’s WMD 

Society used to simply trust the expertise of authority without question. People suffered from some sort of inferiority complex that made them think that ‘scientists’ would know best. As we have seen the media has eroded this with scaremongering, sensationalising and misreporting. However, they have one more killer tool in the toolbox: Statistics. 

The world is a complicated place. There is far too much information to possibly report it all, so we need to distil all the facts into key elements, ‘salient points’ if you like, that give a fair representation of the whole. In order to do this correctly, science produced the statistical method, a rational system for describing sets of data. It provides ways of letting the human mind grasp the important information held in large lists of numbers. The ‘average’ is a good example: a player’s batting average is a faster and easier way to judge them than a long list of all the swings they ever took.

So, statistics are essential to the media, who routinely inform us, sometimes well. However, few people out there realise how easily statistics can be coloured and spun. This is compounded by the difficulty most people have coping with very large numbers (the same trick the lottery uses to fool people into thinking a lottery ticket is a wise investment).

Rather than do a poor job of examining this, I refer you to a good analysis on the subject: “Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists” by Joel Best (2001, University of California Press) 

Attacking Science 

Another damaging phenomenon worth noting is the new tendency for the media to attack science directly. Recently, especially in the global warming debate, certain parties (with vested interests) have used the media to accuse science of dealing in uncertainty. 

The very pillars that form the foundation of science – things like theories, scepticism and debate – are being held up as evidence that scientists cannot agree on anything. Is the world heating up as the result of human activity? According to some, ‘possibly’ is the best answer that science can offer.

Scientists are rightly incensed by this slander, but what can they do? It is proving very hard to explain to the masses why this uncertainty is good and right. 

It will be even harder to explain to people that even when most scientists do agree, they are often later proved wrong – which many will cheerfully accept, changing their position in the light of the new evidence. But this great strength is seen as flip-flopping by the public, another sign of weakness.

The Future 

In this short essay, I have tried to examine why the reputation of science has been taking a hit in the public’s eye. We have seen how certain terrible events like Chernobyl were associated with science and how the media has misreported on the debates of the day. We have also touched on the trouble statistics cause and the difficulty in selling uncertainty. So what does this all mean? 

Is science doomed? I don’t think so. For even though the scientific community has lost ground in the struggle against the tides of ignorance, there is light at the end of the tunnel. 

Big business will continue to tell us whatever sells products, journalists will continue to write whatever sells papers, politicians will continue to say whatever wins votes; but these truths are not malicious forces bent on the destruction of science, they are simple evolutionary forces in the pool of life. And I think, that just like mankind, science will simply evolve and move on. 

References:
http://news.bbc.co.uk/1/hi/uk/60510.stm (article at the start of the MMR scare)
http://news.bbc.co.uk/1/hi/health/2038135.stm (more recent article summing up the MMR scare)
http://www.amazon.com/gp/product/0520219783 (Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists)

Race track pondering…

I have not personally ever raced cars. Of course I fancy my driving skills, and, like most people, am sure I am better than average, but only half of us can be right…

But watching the Shanghai Grand Prix this morning, I had a thought. Now this thought may be well known to race drivers the world over, but in all my years of listening to Murray Walker and his colleagues I never heard them discuss this. So perhaps the thought is highly original and clever. Or perhaps it’s thoroughly wrong!

So this is it: the racetrack can be divided into two parts, the parts that are engine (acceleration) limited (i.e. where the pedal is to the metal) and parts that are ‘grip’ limited (accelerating/braking or lateral/turning or a combination thereof).

I spent the afternoon playing with the kids, but when I should have been preparing to catch one or other at the end of a death defying flying fox manoeuvre, I found myself trying to think of exceptions to my hypothesis.

What about the approach to a corner, before you start turning, I thought? When driving round town, I commonly coast into corners with my foot off the pedal, and no G-forces to speak of. But then in a race, would you coast, even for a moment?

I think not, you would keep your foot flat until the last second then hit the brakes, with ‘coasting’ only lasting as long as it takes to move your foot 6 inches.

So ‘late braking’ is not just braking late, it’s pushing at the boundary that exists between two parts of the track.

This concept should let the driver know when to just put the foot flat (and relax!), because the engine-limited boundary is fairly safe – it simply can’t be exceeded – whereas the grip limit, if exceeded, will end badly.
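If you wanted to test the idea against telemetry, the classification could be as crude as this sketch (the grip limit, throttle threshold and sample values are invented for illustration; a simple ‘friction circle’ check stands in for the grip limit):

```python
import math

def limit_at(throttle, long_accel_g, lat_accel_g, grip_limit_g=1.8, tolerance=0.05):
    """Very rough classification of one point on the lap:
       'grip-limited'   - combined acceleration sits on the friction circle
       'engine-limited' - driver is flat out but well inside the circle
       'neither'        - by the hypothesis, time being left on the table"""
    combined = math.hypot(long_accel_g, lat_accel_g)
    if combined >= grip_limit_g - tolerance:
        return "grip-limited"
    if throttle >= 0.99:
        return "engine-limited"
    return "neither"

# throttle (0-1), longitudinal g, lateral g -- made-up telemetry samples
print(limit_at(1.00, 0.4, 0.1))    # long straight       -> engine-limited
print(limit_at(0.00, -1.7, 0.5))   # hard on the brakes  -> grip-limited
print(limit_at(0.50, 0.2, 0.3))    # coasting mid-corner -> neither
```

By the hypothesis, a well-driven lap should return ‘neither’ essentially nowhere.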

It should also guide the racetrack designer: grip-limited sections are more stressful, and the transitions from one limit to the other are ‘interesting’.

Now, halfway through the movie version of Roald Dahl’s “The Witches”, I realised my hypothesis needed a big tweak. Racetracks also have bumps, debris, oil slicks and sometimes rain. Not to mention those other pesky cars. These all mean either limit could suddenly apply at almost any time, any place on the track. So the division of the track would never be precise, only a guideline.

However, I restate the hypothesis: one limit should always apply, if it doesn’t, you could be going faster…

Someone please shoot this down…

The scientific method defined (well, hypothesised at any rate)

I recently realised that the jury is out on exactly what science and the scientific method are (or should be, at least).

Some would say that science is the endeavour to understand the world, answer the “how” behind the ocean tides, rainbows or seed germination. So the scientific method is any way we might do this. Sounds reasonable to me.

However, some would say that science is the business of ‘facts’ or ‘truth’ and proofs. We do experiments to ‘prove’ our hypothesis. This is the definition I would like to take issue with.

Theories and facts confused…

I get really agitated when I hear people say that evolution is a ‘fact’. Not because I’m a nutty young earth creationist (I’m not), but because no-one has yet furnished a proof. But, you may argue, there’s loads of evidence; it’s clearly a fact.

But evidence is not the same as proof.

Even if something is 99.999% sure, it is still not sure.

I think the trouble comes because people are never taught that those ‘theorems’ and ‘proofs’ they learned in maths class are not quite the same as the theories and evidence in the scientific method.

So is maths a science? Well, yes, sort of. But while it can deal with real things, like counting sheep, it actually deals with a sort of imaginary world (the so-called Platonic ‘world of ideas’). The whole of maths is a mental construct with no known (‘proven’) basis in reality. But nonsense, you say, of course there are numbers in the real world! Well so there are, but there are no proofs!

Proofs are only possible in a fully ‘understood’ world, and because the world of maths is underpinned by a set of axioms, it is, more or less, ‘understood’. But the real world in which we live is not like that. We don’t understand how the brain works, we don’t know how many dimensions there are, we don’t even know if there is a god.

So does that mean we don’t know anything? The media (and opponents of science) use this uncertainty to undermine science. “You can’t prove there is no God, because there is!” Hey presto, a proof of God.

No, science and the scientific method doesn’t do proofs and facts. So what does it do?

Let’s consider the old chestnut, evolution. People had a book that explained the marvellous spectrum of life, from the caterpillar to the jellyfish. This was good enough for many years. But some clever folks started to question why God would bother to make different tortoises on different islands, and why He would go to all the trouble of putting dinosaur bones in certain rocks and why he would disguise their uranium-lead isotopes to make them look millions of years old.

So a theory was proposed (Darwin’s natural selection) that explained the incredible story of species and, for good measure, predicted that humans are apes, which went down well in the church.

Since then, loads and loads of observations have been made that confirm the theory (with the odd tweak). It’s a theory that would have been easy to disprove. If it were wrong, animals that couldn’t logically be explained by the theory would have cropped up. But they haven’t.

But all this evidence is not proof. And the lack of a disproof isn’t a proof.

The same is true for all accepted theories. The sun and the moon are thought to cause the tides. Is that a fact?

If you ask a scientist, even a good one, he/she may well say yes, it’s a fact. Because it is so darn likely to be right. Because there is no good alternative theory. Because no-one is disputing it. Because the maths is just so neat. Because the theory can make predictions. All good reasons to accept a theory. But they do not make it fact.

So we do know ‘stuff’, plenty of stuff, facts to all intents and purposes, but not strictly facts in the sense of logical proof.

So what is the scientific method, then?

Science is the system of theories and hypotheses about the nature of reality that have not yet been disproven and which are ranked by the weight of evidence in their favour.

It is like a model of the world that we are ever refining, chucking out wrong theories, refining the ones that work. The scientific method is that refinement process. Well that is my hypothesis. The truth may be altogether different!

Dumbing down?

This is my first ever blog post. Ra-ra and all that, let’s get to the subject matter.

Yes, it’s going to be one of those repositories for all those thoughts I probably vastly overvalue when I first conceive of them. But as I cannot be objective and they may actually serve some purpose, I might as well pop them on-line.

Topic of today? UK exam scores. Why? I just read some other blog on the subject: http://www.badscience.net/2007/08/calling-all-science-teachers/ and have some feelings on the matter.

I am not particularly qualified to comment on the education system, so I beg of you don’t listen to my ‘opinion’, but rather follow my logic…

Many people have suggested – most recently in the public eye, Dr Goldacre in his excellent book “Bad Science” – that exam standards may be dropping in the UK.

I’d like to analyse this statement for the general case (i.e. any population of which a subset sits an annual exam in which the questions do not repeat). Let’s try to frame the question of the exams’ ease as a less emotive, logical statement…

Let’s say we have data that show the pass rate is gradually moving up year-on-year.

This must mean that one or more of the following is true:
i) the population is getting genetically smarter
ii) the population is increasingly well prepared by its environment (parents, teachers, peers, the internet, etc.)
iii) the subset of people doing the test has changed
iv) the questions are becoming better correlated to what people know
v) or last, the test questions are getting gradually ‘easier’ (or the marking more generous)

There may be more, but I don’t want the extreme complexity to cloud my (eventual) point.

Now each of these statements is hard to prove without more data – and the only data we seem to have is the test scores (although we do have the tests themselves which may prove useful).

It may well be that people are getting smarter – but we might use some science to tackle that – for example you could argue that evolution cannot work this fast (and I personally doubt that nerdiness is a particularly good survival and seduction tool).

But the environment is certainly changing, the subset doing the tests may be drifting, schooling techniques are being constantly refined and the correlation between what’s interesting (celebrities, MMR vaccines) and what’s examined is also hard to pin down.

I would say there is more than enough vagueness to ensure that no-one, no matter how well qualified, could answer the question “are we dumbing down” with any conviction.

However, there is a “but”.

The examiners can set the difficulty of each fresh test to be whatever they want (in theory). They could make it easy and let everyone get A’s, or they could make them so hard that only the brightest “X” percent get an A. Yet what we see, year on year, are slight improvements.

There are (at least) two hypotheses as to what the examiners are doing:
a) they are aiming to make the questions identical in difficulty to the last year, despite the full knowledge that this strategy has, to date, resulted in a gradual trend toward better marks.
b) they are deliberately aiming to get just slightly better results than last year due to some “incentive”

As the examiners for all the subjects are probably a fairly independently minded bunch and as there is no evidence for it, there are good reasons to doubt the latter hypothesis. Occam’s razor would surely favour the former, though we can’t be sure.

So where does that leave us?

We can’t suddenly make the tests harder, thus lowering the number of A grades to what they were years back – that would be unfair, and would mean that future young people will actually have to know more and work harder than their colleagues from the present time to get an A.

Why not simply rank the scores, then place predetermined fractions into each grade? This, incidentally, is what (I believe) was done when I went to school, which was essential as we had several different regional exam boards with different exams, so rankings rather than absolute results were felt to be more comparable. Isn’t this how IQ test scores work? This would mean that, by definition, exam/IQ scores for a population simply couldn’t increase with time.
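Mechanically, that ‘rank then slice’ scheme is trivial – here is a sketch (the grade bands are invented for illustration):

```python
def norm_referenced_grades(scores, bands=(("A", 0.10), ("B", 0.20), ("C", 0.30), ("D", 0.40))):
    """Rank the scores, then place predetermined fractions into each grade.
    Anyone below the listed bands gets a 'U'."""
    ranked = sorted(scores, reverse=True)
    cutoffs, taken = [], 0
    for grade, fraction in bands:
        taken += round(fraction * len(ranked))
        cutoffs.append((grade, taken))
    results = []
    for position, score in enumerate(ranked):
        grade = next((g for g, limit in cutoffs if position < limit), "U")
        results.append((score, grade))
    return results

marks = [82, 71, 71, 65, 58, 52, 47, 40, 33, 21]
for score, grade in norm_referenced_grades(marks):
    print(score, grade)
# However the raw marks drift over the years, the top 10% always get the A.
```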

Perhaps most attractive is the option to leave things as they are in the UK, and ignore the media circus. After all, what does their opinion matter?