I have been kept away from writing on this for a few years, due to life – three kids, crazy job, lots of travel, yada yada. But that was true before, so that’s a bullshit excuse.
The real reason I kept away was that I was discouraged.
I had got stuck in my attempts to understand space-time.
But today I got a wake-up call…
I read a book excerpt (on Gizmodo) of ‘Spooky Action at a Distance’ by George Musser, published just last week.
And right there, in plain English, it was: “If you agree that the fundamental level of physics is not local, everything is natural, because these two particles which are far apart from each other explore the same fundamental nonlocal level. For them, time and space don’t matter.” A quote from Michał Heller.
Damn. People thinking about quantum entanglement decided that if we accept distant entanglement was indeed ‘real’, as we accepted the speed limit on light is ‘real’, that space itself would adapt to avoid a paradox.
Well, in my own work I had decided that exactly the same assumption could be used to explain away the weird interference in the double-slit experiment.
My approach was this:
If we take the Lorentz Transformation to calculate the geometry of the double slit, we see that from the perspective of the single photon, the whole journey is compressed into a single spot. And under such conditions, interference between the ‘possible paths’ is no longer a contradiction.
It also hints tantalizingly that the wave nature of light is a sort of artifact of trying to take a cross-section of what is essentially a point event.
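To put a rough number on this, here is a minimal sketch (the function name is my own, and I’m treating the photon’s journey as a plain special-relativity length contraction – for a photon itself v = c exactly, so the contracted length goes all the way to zero):

```python
import math

def contracted_length_m(proper_length_m, v_over_c):
    """Length of a journey as judged by a traveller moving at v = v_over_c * c."""
    gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
    return proper_length_m / gamma

# One light-year (~9.46e15 m) of lab-frame distance, seen from on board:
for v in (0.9, 0.99, 0.999999):
    print(v, contracted_length_m(9.46e15, v))
# As v -> c, the whole journey shrinks towards a single point
```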
I am therefore very grateful to George Musser, because he will allow me to pick up this thread and see where it leads.
I like to start by imagining I am a photon, leaving from, say, my nose, and heading away from earth across the galaxy, eventually terminating somewhere, let’s say on a far-off star – being absorbed as an electron there leaps up to a higher orbital.
From the photon’s own perspective, if that’s a possible perspective to have, time has not progressed – this means it leaves, travels and arrives at once. This means that even as the photon is waiting to leave an atom on my nose, there is a sort of connection with another atom, far away across the galaxy, which is waiting to accept the photon, and then click, all of that distance disappears, it’s all a single point in space, and the photon relocates, somehow without even having to move. My nose and the stars are somehow momentarily at one. Spooky…
This started as a fancy, but I can’t seem to break it!
For example – the approach also seems to have something to say about energy quantisation…
The issue there is that electrons should fall to the atomic nucleus, but don’t – this is because they can’t find an outlet for ‘that particular quantity’ of energy.
Now, with the idea that space and distance are illusory, we can look at every photon emission as paired with an ‘acceptance’ somewhere else. So far we’ve assumed these are unrelated events, but now we see they must be the same event – so it seems natural that these events require some degree of serendipity to occur. Not just any atom can absorb just any photon…
It strikes me we could test this thinking, how can we do it?
Can we send photons that really have no inevitable target? Seems like we could, but the maths is telling me no…
I read some comments on Scientific American today that instantly made my blood boil. Or cavitate at least.
It was an explanation of how tall trees get water right up to the top. No, I’d never thought about that before either.
Anyhow, anyone who’s drilled a borehole knows you can only suck water up 33 ft before you get a vacuum forming, water boiling and general pumping failure. Hence the need to put a pump at the bottom of a deep borehole.
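As a sanity check, that 33 ft figure drops straight out of atmospheric pressure – here’s the back-of-envelope sum, assuming a standard atmosphere and fresh water:

```python
# Maximum height you can suck water before the column 'lets go'
# (assumed values: standard atmosphere, fresh water)
P_ATM = 101_325.0    # Pa
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

max_suction_height_m = P_ATM / (RHO_WATER * G)
print(max_suction_height_m)            # ~10.3 m
print(max_suction_height_m * 3.28084)  # ~33.9 ft
```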
Now, I had always thought capillary action was what sucked water up plants, handily bypassing this issue, and there, right in the comments, it was asserted that this was a ‘common misconception’…
What, me wrong!? Never!
After the shock, I did what a good scientist is supposed to do, fighting the desire to simply namecall, I watched her darn video.
I remained skeptical. Very skeptical. I again overcame the desire to write rude comments on YouTube and went and read up on it properly…
Ok, so it turns out that there is some sort of truth to it: indeed some clever people believe water can be ‘sucked’ to the top of tall trees, which does indeed require negative pressure.
So I ask, why won’t the water boil?
Because, they say, it’s ‘meta-stable’. Like super-cooled water, or superheated water, water can supposedly go into ‘tension’ without boiling if only you can prevent that initial bubble forming. Simple!
A little more thinking and internal wrangling, and I slowly conceded it just might be. Yes, ok, negative pressure is not really all that radical, it is essentially tension. It’s common in solids, it’s just the idea that water can be ‘tense’ that is difficult to get one’s head around.
So, the process had begun; I started to consider that maybe I was wrong. It’s not pleasant folks, and I am not trying to beat my own drum, I am sure there are plenty of other times when I’ve failed this test, it was just interesting because here I think I passed it…
Anyway, back to the point. Alas, I then read even more deeply, and though I find myself agreeing that water can indeed be under tension, and that sort of does mean negative pressure, I’ve yet to be convinced that ‘wicking’ is not at least involved in tree sustenance. Anyone who has dropped a dry cloth in water knows the water climbs into the fabric.
Furthermore, if there was negative pressure in the tree’s ‘pipes’ why wouldn’t they collapse?
It took deeper digging, but now all my cognitive dissonance is resolved, and I feel just fine closing my investigation with this makeshift conclusion: that while trees do suck water up (via transpiration and the pull of surface tension in narrow openings) the pressures needed are not too crazy BECAUSE OF THOSE GOOD OL’ WICKING EFFECTS!!
Yup, I have to conclude that the attraction of the fluid for the xylem walls helps ‘keep the water up’, thus preventing it from pulling too hard on the water above it.
It turns out this is what many others think [great minds for sure], and some [’nuff respect] took the steps of building a pressure probe small enough to poke into a plant’s pipework. What they found supports my newly cherished (but alas already 120-year-old) Cohesion-Tension theory of tree hydration.
In other words, while wicking (capillary action) is not a sole actor, it is there in a critical supporting role. Aaah that’s better, as you can see I wasn’t totally wrong 😉
PS. On the other hand, negative pressures seem to be a new and reproducible fact for us to worry about!
Ok, if you’re tired of being lectured about your sweet tooth or laziness (or both), and just want the straight dope from an engineer, you’ve come to the right place.
You see, over the next couple of minutes, we’ll see that a human body is not much different from, say, a tractor. It’s a tough machine and, just like a tractor, has very few needs – a little fuel, a little air and a little water.
Ok, ok, we’re a little more complex, but when push comes to shove, we are pretty similar, let me show you…
Food = Fuel
My wife despairs, but I must point out that she chose to marry an engineer with hardly any niceties. Yes, ok, food is more than just fuel, it’s one of the joys of life yadda-yadda, but, to the engineer in me, food is just a handy…
I have been reading some pretty strange stuff about Gravity recently. It seems there are some pretty odd folk out there who have taken thinking about physics to a new, possibly unhealthy, level.
Basically, they are crackpots. Well I guess it’s a slippery slope – one day you wonder why the earth is sucking down on you, the next you decide to spend some time on the knotty question. Soon enough you think you’ve got it, it is clearly that the earth is absorbing space which is constantly rushing down around us dragging us with it. Or similar.
So yes, it’s true, Einstein did not ‘solve’ Gravity, and there is still fame and fortune to be had in thinking about gravity, so this is the example I shall use today.
The trouble with Gravity is that Einstein’s explanation is just so cool! He explained that mass warps space and that the feeling of being pulled is simply a side effect of being in warped space. It sounds so outlandish, but also so simple, that it has clearly encouraged many ‘interesting’ people to have a crack at doing a better job themselves.
So, as a service to all those wannabe physics icons, I offer today a checklist – what hoops will your new scientific theory have to jump through to get my attention, and that of the so-called ivory-tower elite in the scientific community?
Requirement 1: Your theory needs to be well presented
Yes, it may sound elitist to say, but your documentation/website/paper/video should have good grammar. Yes, yes, one should not use the quality of one’s English to judge the quality of one’s theory, and I know prejudice is hard to overcome, but this is not my point. My point is that in order to understand a complicated thing like a physics theory it needs to be unambiguous. It needs to be clear. It needs to use the same jargon the so-called ‘elite’ community uses. Invented acronyms, especially those with your own initials in them, are out.
Requirement 2: Your proposal needs to be respectful
Again, this is not about making you bow to your superiors in the academic world. Indeed, in the case of Gravity, the physics community is one of the most humble out there. While I agree academia is up its arse most of the time, this is about convincing the reader that you know your stuff. In order to do that, you need to show that you know ‘their stuff’ too. If you have headings like “Einstein’s Big Mistake” it is a bit like saying to the reader ‘you are all FOOLS!’ and cackling madly. Don’t do it!
Respect also means you need to answer questions ‘properly’. That means clearly, fully, and in the common language of the community. You cannot say “it’s the responsibility of the community to test your theory”. This is a great way to piss people right off. It is your responsibility to make them want to. This usually means dealing with their doubts head-on, and if you can do that, I promise you they will then want to know more.
Requirement 3: You need to develop credibility
Sorry, as you can see we have yet to consider the actual merit of the theory itself. I wish it were not so, but we are humans first and scientists second. We cannot focus our thoughts on a theory if we doubt the payback. And if you say that aliens came and told you the scientific theory, then people are unlikely to keep listening, unless, perhaps they’re from Hollywood.
But seriously, credibility is the hidden currency of the world, it opens doors, bends ears and gets funds. It takes professionals decades to build and it is really rather naive to waltz into a specialism and expect everyone to drop their tools and listen to you.
That said, the science world is full of incomers; it is not a closed shop as some would have you believe. If you follow requirements 1 and 2, and are persistent (and your theory actually holds water) then you are very likely to succeed.
Requirement 4: Your theory needs to be self-consistent
This exhibit is a great example of how not to go about promoting your theory. “Monumental Scientific Discovery!” it screams across the top, then the first claim is this:
1) The Acceleration of earth’s Gravity x earth orbit Time (exact lunar year) = the Velocity of Light.(9.80175174 m/s2 x 30,585,600 s = 299,792,458 m/s)
Now this is rather remarkable. Can it really be that you can calculate the speed of light to 9 significant figures from just the earth’s gravitational acceleration and the length of a year? Intuitively I suspected you could concoct such a coincidence (eventually), but then I started to think: what if the earth were irregularly shaped? The acceleration of gravity is actually not all that consistent either, depending on where you are. So I checked, and then I noticed he said ‘lunar year’. What? Why? What is a lunar year? Then I calculated that the time he used was 354 days, which isn’t even a lunar year. Add to that that he gives the acceleration of gravity on earth to 9 figures despite the fact that nobody knows it that well (like I said, it is location dependent). Does he do the same test for other planets? No. Well, what if they have no moon!
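Two minutes of arithmetic shows where the ‘match’ really comes from – all the numbers below are the exhibit’s, not mine:

```python
# All figures are taken from the quoted exhibit
seconds = 354 * 86_400        # his 'lunar year': exactly 354 days
print(seconds)                # 30585600 -- matches his quoted time

g_claimed = 9.80175174        # quoted to 9 figures, far beyond how well g is known
print(g_claimed * seconds)    # ~299792458 -- the advertised 'match'

# ...because the quoted g is simply c divided by his chosen second count:
print(299_792_458 / seconds)  # 9.80175174...
```

In other words, the ‘discovery’ is just the definition of division, dressed up.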
Requirement 5: The theory needs to be consistent with well-known observations
Now if your theory has got past requirements 1-4 , well done to you, you will be welcome to join my table any time. Now is when you may need some more help.
Once a theory is consistent with itself, it now needs to agree with what we see around us. It needs to explain apples falling, moons orbiting, light bending and time dilating. This is the hardest part.
It cannot leave any out, or predict something contrary to the facts. It cannot be vague or wishy-washy. It needs the type of certainty we only get from the application of formal logic – and that old chestnut – mathematics.
No, you cannot get away without it; there is no substitute for an equation. Equations derived using logic take all the emotion out of a debate. And they set you up perfectly for requirement #6.
Requirement 6: The theory needs to make testable predictions
If your theory has got past the 5 above, very nice job, I hope to meet you one day.
We are all set, we have a hypothesis and we can’t break it. It has been passed to others, some dismiss it, others are not so sure. How do you create consensus?
Simple, make an impressive prediction, and then test that.
Einstein’s field equations, for example, boldly provide a ‘shape’ of space (spacetime, actually) for any given distribution of mass. With that shape in hand you should then be able to predict the path of light beams past stars or galaxies. These equations claimed to replace Newton’s simple inverse square law, but include the effects of time, which creates strange phenomena (like frame dragging) that, importantly, could be, and were, tested.
The beauty of these equations, derived via logical inference from the apparent invariance of the speed of light, and since confirmed many times, is that they moved physics forward. Rather than asking ‘what is gravity?’, the question is now ‘why does mass warp space?’. It’s a better question because answering it will probably have implications far beyond gravity – it will inform cosmology and quantum theory too.
So if you are at last thinking of sharing your immensely important insights with the world, and want to be listened to, please remember my advice when you are famous and put in a good word for me in Stockholm. But please, if, when trying to explain yourself, you are finding it tough, please please consider the possibility that you are just plain wrong…
Jarrod Hart is a practicing scientist, and wrote this to shamelessly enhance his reputation in case he ever needs to peddle you a strange theory.
After seeing this impressive video by Robert Weber about a town near where I used to live, I decided to give tilt-shift photography a try…
I never tire of that video, and the music grew on me too.
Anyway, so here is the straight dope on tilt-shift…
What you’re supposed to do is take a picture with the lens tilted along a horizontal axis relative to the photographic plate (or CCD for newfangled cameras). This means only a strip across the middle is in focus, and the picture gets gradually more blurry towards the top and bottom.
Now this sounds wasteful of good focus, but it is actually something the eye is very used to seeing whenever it looks at a horizontal plane, such as a table-top (one that’s pretty close). Of course, for bigger things, like a football field, you can usually focus on the whole thing, especially if you’re seated where the tickets are cheaper. Anyone who has tried to take a photo of small things, or used a microscope, knows this.
So basically, the brain associates this blurring with ‘close’ things, and uses it as one of its tools to guesstimate the size of things.
So you (yes you!) can fool the brain into making things look smaller by adding blur to your photos!
Aside for my science readers…
You can actually create true focal depth blur by using a very wide aperture in your camera; however, even the widest apertures struggle to create much miniaturisation – to get true blur at significant distance, you really need to scale up the camera proportionally with the distance. To make a warehouse look like a microchip, you really need a camera big enough that its microchips are the size of warehouses 🙂
Now, when I was researching this, I was probably thinking what you’re thinking. Mile-wide cameras are probably a custom job, and even cameras where the lens can be tilted never fail to confuse the nice people at Wal-Mart.
Needless to say, Photoshop (other brands are available!) can add the blur.
Before we dive in, another top tip is that air tends to add blue and wash out your colour saturation; you can remove the faraway-mountain look by bigging up the red and green saturation. So here was an early attempt of mine:
Here I took a fairly plain photo and added progressively more blur toward the top and bottom, taking care to mask the tree on the right from the blur. I also greened it up a bit 😉 – I like how it makes the destiny of the golf ball sort of mysterious. Like most of my golf balls.
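For the curious, the graduated blur itself is easy to sketch in code. Here is a toy version (the function is my own invention, using a crude horizontal box blur on a grayscale image; real tools use a smooth gaussian blur on colour images):

```python
def fake_tilt_shift(image, focus_row, max_radius=3):
    """Blur each row more heavily the further it is from focus_row.

    `image` is a list of rows of grayscale values (0-255). Just a sketch
    of the idea, not a substitute for a proper image editor.
    """
    height = len(image)
    out = []
    for y, row in enumerate(image):
        # blur radius grows linearly with distance from the in-focus row
        frac = abs(y - focus_row) / max(1, height - 1)
        radius = round(frac * max_radius)
        if radius == 0:
            out.append(list(row))  # in the focus strip: untouched
            continue
        blurred = []
        for x in range(len(row)):
            window = row[max(0, x - radius): x + radius + 1]
            blurred.append(sum(window) / len(window))
        out.append(blurred)
    return out

# A stripy 5x4 test card: the middle row stays sharp, the edges smear out
result = fake_tilt_shift([[0, 255, 0, 255] for _ in range(5)], focus_row=2)
```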
Of course, touch-ups like the tree are tedious, you really need a photo that has the faraway stuff at the top and the things at the bottom to avoid such issues. Or you can just ignore them and it usually works out fine:
So I blurred the treetop. Most viewers (test subjects in my experiment) did not notice this until I explained what I had done.
Here is one last example; the photo just asked for it…
Enjoy trying it out, and please do add links to your own work – though not to ones you find by googling “tilt-shift photography”, I already did that, and heartily recommend it 😉
Have you ever wondered how the musical notes we use were chosen?
I mean, when I was growing up I was learning one thing in music class (do-re-mi-fa-so-la-ti-do!) and another in science class (440Hz) and never the twain did meet…
So what gives? I always suspected the musical community were being scientific, but their language was all Greek to me.
Years passed and only rarely did I get the chance to wonder at this question – and meantime my science education was getting the upper hand – I learned how sounds travel through the air and how the ear works – how deep, low notes are the result of compression waves in the air, perhaps a few meters apart, while higher-pitched sounds were compression waves much more tightly packed, perhaps millimeters apart. I also learned a note could have any frequency, and so there was no reason to pick out any ‘special’ frequencies.
However, just recently I realized, in a flash of light, that with an infinite number of notes to choose from, musicians had very deliberately selected only a few to make music with, and I suddenly wanted to know why. Was it arbitrary? Was it the same in different cultures? Why did some notes seem to go together and others seem to clash? And of course, as The Provincial Scientist, I wanted to know if our early musicians had done well in their choices.
As it is now the era of the internet I set about to find out more and thought it was so interesting, it would be a crime not to report what I learned on my blog. So here is what I learned…
In Search of Middle C
The best place to start is probably a vibrating string. The vibrating string is clearly key to pianos, harps, guitars and, of course, the entire ‘string’ section of an orchestra. If you stretch a string and pluck it, you are starting an amazing process – as you pull on the string, you create tension, you literally stretch the string and store energy in the fabric of the string. When you let go, the string shrinks under that tension, which pulls it straight. Alas, by the time it’s straight it has picked up some speed, and the momentum keeps it going until the string is stretched again – thus the string swings back and forth – and it would continue forever were it not for frictional losses – energy is lost in heating the string, but some is also lost in buffeting the air around the string. The air is pushed then pushed again with each cycle, creating compression waves that ripple out into the room – and into our ears. Thus we hear the string.
You can see the vibrating string doing its magic here:
You can see in the video that the string swinging back and forth is an awful lot like a wave moving up and down the string! Indeed it is!
The speed at which the wave moves (or string vibrates back and forth) – and thus the note we hear – is determined by a few simple factors: the tension in the string, the weight of the string and the length of the string. The greater the tension, the greater the force trying to straighten the string; but the greater the weight, the more momentum there is to make it stretch out again.
It is therefore easy to get a wide range of notes from a string: start with a long, heavy wire and only tension it enough to remove all the slack. The note can then be gradually raised by decreasing the length or the weight of the wire, or by increasing the tension. These are the tricks used in pianos, guitars and so on.
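If you want to play with the numbers, the textbook formula for an ideal string is f = sqrt(T/μ)/(2L), where T is the tension, μ the mass per unit length and L the length (the example figures below are made up for illustration):

```python
import math

def string_fundamental_hz(length_m, tension_n, mass_per_metre_kg):
    """Fundamental frequency of an ideal stretched string: f = sqrt(T/mu) / (2L)."""
    return math.sqrt(tension_n / mass_per_metre_kg) / (2 * length_m)

# Made-up but plausible numbers: a 0.65 m string, 60 N of tension, 1 g/m
print(string_fundamental_hz(0.65, 60.0, 0.001))   # ~188 Hz
# Halve the length (fret it half-way) and the pitch jumps an octave:
print(string_fundamental_hz(0.325, 60.0, 0.001))  # ~377 Hz
```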
So far so good. But if you have several strings to tune up, what notes should you pick – from infinitely many – to make music with?
The human ear is an amazing device and can hear notes ranging anywhere from 20 to 20,000 compressions per second (the unit for per second is called Hertz or Hz for short). That is a lot of choice!
As I am sure you guessed, the key is to understand why some notes seem to ‘go together’, and the answer lies back in the vibrating string.
Overtones of Overtones
Firstly, it turns out that when you pluck a string, you actually get more than one note. While the string may swing back and forth in one elegant sweep, it may also host shorter waves, with half or a third or a quarter the wavelength hiding in there too. This video shows how one string can vibrate at several speeds:
Although the video shows the string vibrating at one speed each time, it is actually possible for a string to carry more than one wave at a time (this amazing fact deserves its own blog posting, but we will just accept it for now).
So when a string is plucked, the string ‘finds’ ways to store the energy with vibrations – it finds a few frequencies that carry the energy well, these are called ‘resonant frequencies’, there will be several, but they will all be multiples of one low note. As these higher notes are all multiples of a single low ‘parent’ note, they also have consistent frequency relationships between one another, 3/2, 4/3, 5/4 and many many others.
So clearly, once you have one string, and you want to add a second, you could tune the second string to try to match some of the harmonics of the first string. The best match is to pick a string whose fundamental note is at 2x the frequency of the first string. This string’s fundamental note will match the first string’s 2nd harmonic (also called its first overtone). The second string’s harmonics will also perfectly match up with pre-existing harmonics from the first string. The strings are what is called consonant, they ‘go together’.
Now although the second string will have some frequencies in common with the first string, it turns out that there is an even stronger reason why these notes will go together – it is because when you play several strings at once, you are no longer just playing the strings, the instrument you are playing is the listener’s eardrum. The eardrum will vibrate with a pattern that is some complex combination of the wave-forms coming from the two (or more) strings. When you add two notes together, it is like adding two waves together and you get an interference pattern – the interference may create a nice new sound:
If we add a low note (G1) to a note one octave higher (G2) we get a totally new sound wave.
If, as in this example, one string vibrates at exactly twice the frequency of the other, the two notes will combine to make a handsome looking new waveform, with ‘characteristics’ from both the original waves – but if the frequencies are not a neat ratio, you will get something a bit messy:
This waveform may not repeat, and is unlikely to be consonant with any other notes you may care to add.
Sometimes, when your second string is fairly close in frequency to the first (say 1.1 x the first string’s frequency) then a second phenomenon rears its head, beating. This leads to the creation of entirely new (lower) frequencies that the ear can hear [click here to listen to a sample]. The sum now looks like this:
Beating can sound awful, though of course, the skilled musician can actually use it to create useful effects.
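If you want to convince yourself the beat really is there, a few lines of Python will do it. By a standard trig identity, the sum of two sines is exactly a fast ‘carrier’ tone wrapped in a slow envelope at half the difference frequency – and since loudness peaks twice per envelope cycle, the ear hears |f1 − f2| beats per second:

```python
import math

f1, f2 = 220.0, 242.0   # Hz -- the second note 1.1x the first, as in the text
for i in range(200):
    t = i / 10_000.0
    direct = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    # identity: sin A + sin B = 2 * cos((A-B)/2) * sin((A+B)/2)
    envelope = 2 * math.cos(math.pi * (f1 - f2) * t)  # slow 11 Hz wobble
    carrier = math.sin(math.pi * (f1 + f2) * t)       # fast 231 Hz tone
    assert abs(direct - envelope * carrier) < 1e-9
```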
We have seen that once you have selected one note, you have already greatly reduced the ‘infinite’ choice of other notes to use with it – because only some will be consonant. Although the best consonance is at exactly 2x the first frequency, we see that once you have picked two strings, the choice for the third string is more limited. Should you be consonant with the first string or the second? Can you be consonant with both? You can be fairly consonant with both, but only by being 2x and 4x their respective frequencies. If you picked all your strings as multiples of the first string, the ‘gaps’ between the notes would be very big, akin to playing a tune with only every 12th key on a piano. So how can we fill in the gaps?
Well, early thinkers quickly realized that you can’t actually select a perfect set of notes – some combinations will mesh well, others will be just a little bit odd. This realization was probably a bitter pill for early musician-scientists to swallow.
In the end, they came up with many competing options, each designed to maximise the occurrence of good ratios – a good example is the just intonation scale:
Frequency ratio to the first note:
C = 1/1, D = 9/8, E = 5/4, F = 4/3, G = 3/2, A = 5/3, B = 15/8, C = 2/1
Here, the musician picks two notes that are consonant (C and the next C one octave higher) and then divides the gap into seven steps. Each note is a special ratio of the lower note – we get neat ratios of 5/4, 4/3 and 3/2 showing up which is good, however the ratios between adjacent notes are much less pleasing!
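You can check those adjacent ratios yourself with Python’s exact fractions (the ratios used are the standard just-intonation major scale):

```python
from fractions import Fraction as F

# The standard just-intonation major scale: each note as a ratio to the tonic
just = {'C': F(1), 'D': F(9, 8), 'E': F(5, 4), 'F': F(4, 3),
        'G': F(3, 2), 'A': F(5, 3), 'B': F(15, 8), "C'": F(2)}

# The steps between adjacent notes are NOT all equal -- note the two
# small 16/15 semitone steps at E->F and B->C'
names = list(just)
steps = [just[b] / just[a] for a, b in zip(names, names[1:])]
print(*steps)  # 9/8 10/9 16/15 9/8 10/9 9/8 16/15
```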
Aside: You will also see that the steps from B to C and E to F are rather small! Now take a look at your piano and note these notes correspond to the white keys on the keyboard that have no black keys between them! This is no coincidence…
Is the ‘just intonation’ division perfect? No, the notes are not all consonant! Remember that with 8 notes in this group, there are 7+6+5+4+3+2+1=28 ratios (or note pairs), and there is no known way to choose them to all be consonant. That is why, although most musical cultures divide their music notes into ‘octaves’ (nicely consonant frequency doublings), there have evolved many different ways to make the smaller divisions.
This scale is based on prioritizing the 3/2 overlap of harmonics and moves three notes very slightly.
It is key to remember there are dozens of ways to do this, depending on what you are trying to optimise – do you want to match the greatest number of harmonics, or some smaller number of stronger harmonics? It may even be that personal taste could come into play.
The Wonderful Piano
Have you ever wondered why you hear someone is playing something in C-minor or F-major? What is the deal there? Well, these are also ‘scales’ – alternative ways to cut up the octave, but from a specific family that lives on the piano.
You see, the piano could also divide the octave into 7 notes, and indeed it was once so divided, but with time musicians realised they could open up more subtlety in their music by adding in more notes. So they decided to add the ‘black notes’, the extra black keys on the keyboard!
So in addition to the 7 notes A,B,C,D,E,F & G, they added C#, D#, F#, G# and A# – they called them ‘half tones’ or accidentals. Of course, there are already two half steps (B-C and E-F) which is why there is no B# or E#. These extra notes gave us 12 smaller steps, and of course choosing 12 consonant notes was even harder than choosing 7!
So, after some hard thinking by scholars including J.S. Bach, a very sensible decision was made – to divide the octave into 12 ‘equal’ steps, which gives us the so-called ‘equal temperament‘, the most popular way to tune a piano. To do this, each note is 2^(1/12) or 1.05946… times higher in frequency than the last one, such that twelve steps will eventually give you a doubling.
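Here’s that arithmetic in a few lines (taking the common 440Hz convention for A, of which more later):

```python
# Equal temperament: each semitone multiplies the frequency by 2**(1/12)
SEMITONE = 2 ** (1 / 12)   # 1.0594630...

A4 = 440.0  # the common concert-pitch convention
def hz_above_a4(semitones):
    """Frequency of the note `semitones` equal-tempered steps above A440."""
    return A4 * SEMITONE ** semitones

print(hz_above_a4(12))  # ~880 Hz -- twelve steps exactly doubles the pitch
print(hz_above_a4(3))   # ~523.25 Hz -- the C above middle C
```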
However, our musical notation is older than the piano and generally only allows for 7 notes per octave, so how do you write music for 12?
Despite there being 12 notes, composers have tended to still feel some combinations of 7 notes ‘go together’ better than others and so have persisted in writing music using only 7 notes, though of the many hundreds of ways you could choose the 7 notes, they have selected 12 combinations, the 12 “Major scales“:
The Major Scales (down the left). Each uses only 7 of the 12 notes on the piano keyboard. The shaded vertical lines correspond to the black keys on the piano.
Personally, realising what these scales were was a breakthrough for me. Looking at the above map helped me to realize several things:
Many long pieces of music will completely ignore nearly half (5/12ths) of the keys on the piano! To play a tune based on a certain ‘scale’ is sometimes said to be played in that ‘key‘.
The scale of C-Major ignores all the black keys, and is probably the oldest/original scale.
Each scale is displaced 4 ‘steps’ from the previous scale (there is a #1 beneath each #5). This 1st to 5th note relationship turns out to be important.
Aside: Note that there are also the 12 “minor scales“. These scales actually use the same 12 subsets of keys as the major scales, but are ‘shifted’ – they have a different starting point (base note, or ‘tonic‘). This may seem a trivial change, but because the gaps (steps in frequency) are not all evenly sized in these scales, the major and minor scales have their two ‘small’ steps in different places, which is supposed to change the feel or mood of the music (or even the gender!)
The Number 5
The number ‘5’ in the pattern we saw above (5th note) was noticed by musicians long before me, and it shows up in other places too.
For example, we saw in the ‘just intonation’ scale above, that the note G had a frequency ratio of exactly 3/2 with the note C. This means that when you hear both together, every third vibration of the higher note will coincide with every second vibration of the lower note. They are thus highly consonant – and they are 4 steps apart on the stave. This relationship is called the ‘perfect 5th‘. It is again no coincidence that the 5th note of each scale has a frequency exactly 50% higher than the 1st and is the base note (aka tonic) of the next scale. Stepping in 5ths (ratios of 3/2 in frequency) 12 times takes you up very nearly seven octaves and eventually back to the first scale.
This cycling behavior allowed the invention of a learning tool called the ‘circle of fifths‘, which helps us to understand the relationships between the scales.
Yet another aside: The ‘perfect fifth’ is called perfect if it is truly a ratio of 3/2 – but recall that pianos have their 12 notes ‘evenly spaced’ (a geometric progression) so the ratio of G to C on the C-Major scale will not be exactly 3/2 – it is actually 0.113% off!
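Both of those mismatches are two lines of arithmetic – twelve perfect fifths overshoot seven octaves (the famous ‘Pythagorean comma’), and the piano’s fifth sits slightly flat of a true 3/2:

```python
# Twelve perfect fifths vs seven octaves: close, but not equal
twelve_fifths = (3 / 2) ** 12   # 129.746...
seven_octaves = 2 ** 7          # 128
print(twelve_fifths / seven_octaves)  # ~1.0136 -- the 'Pythagorean comma'

# The equal-tempered fifth (7 semitones) vs the perfect 3/2:
et_fifth = 2 ** (7 / 12)
print((1 - et_fifth / 1.5) * 100)     # ~0.113 -- percent flat
```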
But What About Middle-C?
Ok, so we have seen how some notes ‘go together’, and how once you have one note, you have clever ways to find families of notes that complement that note – but that leaves just one question – how do we pick that first note?
The leading modern convention is to use the note A that comes after (above) middle-C, and to set it at 440Hz exactly.
The question is, why?
Well firstly, I shall point out that the 440Hz convention is not fully accepted. For example, anyone who wants to hear the Gregorian chants the way they originally sounded would need to use the conventions of the time. Thus there are pockets of musical tradition that do not want to change how their music has always sounded.
However, when it comes to performing a concert with many instruments, it is useful if they all adopt the same standard. The standard is thus sometimes called the concert pitch, and though 440Hz for A is common, this number has been seen to vary from 423Hz to as high as 451Hz.
So the short answer is, there is no really good reason; the choice of 440Hz really just ’emerged’ as a more common option, and when they standardized they rounded it off. While this answer is ultimately trivial, I find a little amusement in the fact that all the music we hear sounds the way it does for no particular reason!
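Just to show how mechanical the convention is: once you fix A at 440Hz, every other equal-tempered note follows from a single formula. A small sketch (the function name is my own, not any standard library’s):

```python
# Every equal-tempered note is A4 (440 Hz) times a power of the semitone
# ratio 2**(1/12). Middle C sits 9 semitones below A4.
A4 = 440.0

def note_freq(semitones_from_a4: int) -> float:
    """Frequency of a note n equal-tempered semitones above (or below) A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

print(f"Middle C (C4):      {note_freq(-9):.2f} Hz")   # ~261.63 Hz
print(f"A5 (one octave up): {note_freq(12):.1f} Hz")   # 880.0 Hz
```

Change the 440 and the whole keyboard slides with it – which is exactly what those 423Hz-to-451Hz historical pitches did.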
Before I go, there is a video I want you to look at. I think it shows beautifully how 12 different frequency oscillations can exhibit some beautiful harmony (or harmonics!)
It has recently been in the news that some particle may have exceeded the legal speed limit for all things: 299,792,458 metres per second.
Of course, this will probably turn out to be a bad sum somewhere or perhaps waves ganging up, but the whole hubbub has raised my hackles, and here’s why.
Because Albert Einstein at no time said what they say he said (see here for example). They misunderstand relativity! Things can move at any speed we want, and I will try to explain the fuss now.
So let’s get to it!
First, we have to consider the way space warps when we move.
The problems started when people realised that light always seems to have the same speed, regardless of the speed you were moving when you saw it. This seems to be a contradiction, because surely if you fly into the light ever faster, it will pass you ever faster?
Well the tests were pretty clear, this does not happen. The speed is always c.
For several years, people were unsure why – until Einstein explained it in 1905. In the meantime, another ponderer of the problem (Lorentz) had already written down the maths required to square the circle.
The so-called Lorentz equations show, unequivocally, that space and/or time need to warp in order for relative speeds of c not to be exceeded, even when two items are going very close to c in opposite directions to one another.
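To see what the Lorentz equations actually forbid, here is a quick sketch of the standard relativistic velocity-addition formula – even two objects each doing 0.9c in opposite directions close on each other at less than c:

```python
# Relativistic velocity addition: the closing speed of two objects
# approaching head-on at u and v is (u + v) / (1 + u*v/c^2), never >= c.
c = 299_792_458.0  # m/s

def relative_speed(u: float, v: float) -> float:
    """Closing speed, in one object's frame, of the other object."""
    return (u + v) / (1 + u * v / c**2)

u = v = 0.9 * c
print(f"Naive sum:        {(u + v) / c:.2f}c")          # 1.80c
print(f"Relativistic sum: {relative_speed(u, v) / c:.4f}c")  # ~0.9945c
```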
So something needed to give, and it was space and time!
So, newsflash! It was not Einstein who first published on space and time warping. His contribution (along with Henri Poincaré and a few others) was to explain how and why. His special theory showed that because there is no ‘preferred’ frame of reference, a speed limit on light was inevitable. The term ‘relativity’ comes from this – basically he said, if everything is relative, nothing can be fixed.
Ok, so we have some nice observations that nothing seems to go faster than the speed of light – and we have a nice maths model that allows it. So why do I persist in saying things can go faster than the speed of light?
Let me show you…
There is a critical difference between ‘going’ faster than light and being ‘seen to be going’ faster than the speed of light, and that is where I am going with this.
So let’s take this apart by asking how we actually define speed.
If a particle leaves point a and then gets to point b, we can divide the distance by the time taken and get the mean speed (or, to be pedantic, the mean velocity if we also track direction).
The issue with relativistic speeds is that the clock cannot be at both point a and point b. So we need to do some fancy footwork with the maths to use one or other of the clocks. So far so good. This method will indeed never get a result > c.
The nature of space forbids it – if the Lorentz transformations that work so well are to be taken at face value, then for something to exceed c by this method of measurement, is much the same as a number exceeding infinity.
So all is still well. Until you ask, what about if the clock is the thing that travelled from a to b?
In this case, the transformations cancel! The faster the movement, the slower time goes for the clock – you will see its ticks slow down, and dividing the distance by those fewer ticks gives a speed exceeding c.
The clock will cover the distance and appear, on your own (stationary) clock, to have travelled at just under c – but the travelling clock will have ticked fewer times!
If you divide the distance by the time on the travelling clock, you see a speed that perfectly matches what you would expect should no limit apply. Indeed, the energy required to create the movement matches that expected from simple Newtonian mechanics.
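This ‘distance over the travelling clock’s time’ actually has a name in the textbooks – proper velocity, γv – and it is unbounded. A few lines of Python show it sailing past c (a sketch; the function names are mine):

```python
import math

# Proper velocity: distance measured in the stationary frame, divided by
# time elapsed on the travelling clock. Unlike ordinary velocity, it is
# unbounded -- it grows without limit as v approaches c.
c = 299_792_458.0  # m/s

def gamma(v: float) -> float:
    """Lorentz factor for speed v."""
    return 1 / math.sqrt(1 - (v / c) ** 2)

def proper_velocity(v: float) -> float:
    """Stationary-frame distance / travelling-clock time = gamma * v."""
    return gamma(v) * v

for frac in (0.5, 0.9, 0.99, 0.999):
    w = proper_velocity(frac * c)
    print(f"v = {frac}c  ->  distance/proper-time = {w / c:.2f}c")
```

At 0.99c the travelling clock’s own tally already works out above 7c.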
The key point here is that while the clock travelled, the reader of the clock did not. If you do choose to travel with the clock, you will see it tick at normal speed, and see the limit apply – but see the rest of the universe magically shrink to make it so.
Some have argued that I am not comparing apples with apples, and that by using an observer in a different frame to the clock I am invalidating the logic.
To those who say that, I have to admit this is not done lightly. I have grown more confident that this inference is valid by considering questions such as the twin paradox over and over.
The twin paradox describes how one twin who travels somewhere at high speed and then returns will age less than his (or her) stationary twin.
Now if we consider a trip to Proxima Centauri (our nearest neighbour), the transformations clearly show that if humans could bear the acceleration required (we can’t), and if we had the means to get to, say, 0.99c for most of the trip, then yes, the round-trip would take over 8 years and no laws would be broken. However, the travellers themselves will experience time 7 times slower (7.089 to be precise). Thus they will have aged far less than 8 years – about 1.2 years, in fact. So, once they get home and back-calculate their actual personal speed, it will exceed all the live measurements.
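The Proxima numbers above can be checked in a few lines (I have assumed a distance of about 4.24 light-years and ignored the acceleration phases):

```python
import math

# Twin-paradox arithmetic for a 0.99c round trip to Proxima Centauri.
# Working in units where c = 1 (years and light-years).
distance = 4.24            # light-years, approx.
v = 0.99                   # cruise speed as a fraction of c

gamma = 1 / math.sqrt(1 - v**2)
earth_time = 2 * distance / v          # round trip on the Earth clock
traveller_time = earth_time / gamma    # round trip on the ship clock

print(f"gamma      = {gamma:.3f}")           # ~7.089
print(f"Earth clock = {earth_time:.2f} yr")  # ~8.57 yr
print(f"Ship clock  = {traveller_time:.2f} yr")
print(f"Back-calculated 'personal speed' = {2 * distance / traveller_time:.2f}c")
```

The travellers cover 8.48 light-years while ageing about 1.2 years – hence the back-calculated ‘personal speed’ of roughly 7c.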
This has bothered me endlessly. Although taken for granted in some sci-fi books (the Ender’s Game saga for example), this clear ‘breakage of the c-limit’ is not discussed openly anywhere.
Still uncertain why people were ignoring this, I read a lot (fun tomes like this one) learned more maths (Riemann rules!) and also started to look at the wider implications of the assertion.
On the one hand, the implications are not dramatic, because instant interstellar communication is still clearly excluded, but that whole idea that a flight to Proxima Centauri must take at least 4 years (for the travellers themselves) is just wrong. If we can get closer to c we can indeed go very far into the universe, although our life stories will be strangely punctuated, just as in the Ender books.
But what about the implications for the other big festering boil on the body of theories that is physics today – quantum theory?
Well, if one is bold enough to assert that it is only measurement that is kept below c and not ‘local reality’, then one can allow for infinite speed.
In this scenario, we are saying measurement simply maps reality through a sort of hyperbolic lens such that infinity resembles a limit. Modelling space with hyperbolic geometry is really not as unreasonable as all that; I don’t know why we are so hung up on Euclid.
With infinite speed at our disposal, things get really interesting.
We get things like photons arriving at their destination the same time they leave their source. Crazy of course… but is it?
Have we not heard physicists ask – how is it the photon ‘knows’ which slit is blocked in the famous double slit experiment? It knows because it was spread out in space all the way from its source to its final point of absorption.
If you hate infinities and want to stick with Lorentz, you can equally argue that, for the photon, going exactly at c, time would stand still. Either way, the photon feels like it is everywhere en route at once.
If the photon is indeed smeared out, it probably can interfere with itself. Furthermore, it is fitting that what we see is a ‘wave’ when we try to ‘measure’ this thing.
A wave pattern is the sort of thing I would expect to see when cross sectioning something spread in time and space.
Please tell me I’m wrong so I can get back to worrying about something useful. No, don’t tell me – show me – please! 😉
Ok, if you don’t know what a starfield simulation is, let’s sort you out now – look at the video below first.
Ok, for those of you without youtube, think then of the screen savers on early windows PCs – you may recall the screensaver that makes it look like you are flying through space – this “stars flying by” visual is the thing I am talking about. If you are interested, you can presently download this screensaver here.
Now when this screen saver came out, I’ll admit I was still a bit of a nerd – with a thing for both astronomy and for computers, so I set out to make my own. What I learned along the way initially puzzled me then annoyed me and then made me give up in disgust.
Ok, so before I tell you the ‘big secret’ of what annoyed me so, take a look at this animation:
I think you’ll agree it’s quite good – yes, the stars are not perhaps as pretty in their distribution as some of the pictures from the Hubble (see below) but that is quite forgivable.
Despite the boring uniformity of the stars, I want to draw your attention to the complexity involved in creating this animation. Just ‘guessing’ the paths of the stars by having them start small, somewhere near the middle, and then gradually grow and swing to one of the edges will not do. I tried this, trust me, it looked crap.
No, it turns out the only way to make this look decent is to do the honest thing and create a virtual 3-D world, place the stars in it, then fly the camera through the space and have the computer figure out the paths for all the stars. Sound tricky? Well it bloody well was in 1995 when I tried it, though I reckon it’s easier now. I used POV-Ray to render hundreds of stills and then tried to create a loop to make an animated gif. It was only like 200×200 pixels and it took days to render but it eventually finished and looked – absolutely nothing like the windows screen saver.
You see, I made the school-boy error of distributing stars ‘realistically’ in my 3-D space – I put them proper distances apart, randomly, and I gave them realistic ‘sizes’ (relative to the inter-star distance). Instantly I had my first problem. The stars were all too small to even be detected by the renderer. Ok, so it turns out stars don’t work like normal things, their apparent size is not due to their actual size but a combination of their brightness and their distance. Fine. So I had to make them far bigger so they could be seen (which is utterly wrong wrong wrong to my purist heart).
Ok, so now I had spots. Did we get that sense of flight? No.
The next issue was that you needed only a few stars to create a ‘busy frame’ (say 20 stars), but most of them were stupendously far away and would stubbornly refuse to budge. The only option was to put absolutely bazillions of stars in the field so that at least a few were nearby enough for you to ‘swoosh’ lavishly past. Of course, to get that many stars, the whole view has to be completely plastered with stars – to the point of being a plain white screen. So I had to do another fudge – I had to create a sort of ‘fog’ that filtered distant light. This meant the viewer would only see nearby stars. Wrong wrong wrong again!!!! I happen to know from my own space travels (on spaceship earth) that we can see rather far without trouble, and thus this fog effect is a terrible hash.
However, I was getting somewhere with the sim. It looked like dots moving now. They did not get any bigger as they got closer, but they did move faster and get brighter, due to the fog. But damn, all the ‘nice’ starfield sims did have the stars actually getting bigger, so now I increased the size of the stars again – so big that the stars were literally only a few dozen diameters apart and hey presto, it now looked good.
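The ‘honest’ method described above really is just a perspective projection: put stars in a 3-D box, fly a camera through it, and divide by depth. A bare-bones sketch of that geometry (no graphics library, and all the numbers are arbitrary):

```python
import random

# Minimal starfield: random stars in a 3-D box, camera flying along +z,
# perspective projection onto a 200x200 screen.
W, H, FOCAL = 200, 200, 100.0
DEPTH = 1000.0

stars = [(random.uniform(-500, 500),   # x
          random.uniform(-500, 500),   # y
          random.uniform(1, DEPTH))    # z, distance along the flight path
         for _ in range(300)]

def project(star, camera_z):
    """Perspective-project a star; returns screen coords, or None if behind us."""
    x, y, z = star
    dz = z - camera_z
    if dz <= 0:
        return None                 # star has already flown past the camera
    sx = W / 2 + FOCAL * x / dz     # dividing by depth is what makes nearby
    sy = H / 2 + FOCAL * y / dz     # stars swing fast while distant ones crawl
    return sx, sy

frame = [p for s in stars if (p := project(s, camera_z=50.0))]
print(f"{len(frame)} stars visible this frame")
```

Run it with the camera_z increasing each frame and you get exactly the behaviour I complained about: almost every star barely moves, because almost every star is far away.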
Stars are not happy bedfellows!
Now think about that – the stars were only a few dozen diameters apart. The earth is actually about one-hundred sun-diameters from the sun; so what we are talking about is a super-dense space, rammed with stars. Wrong wrong wrong. Stars that close tend to get involved in all-out gravity war (see the picture!)
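A quick back-of-envelope check of just how wrong ‘a few dozen diameters’ is, using round numbers for the Sun’s diameter and the distance to our nearest stellar neighbour:

```python
# How many sun-diameters apart are real neighbouring stars?
sun_diameter_m = 1.39e9        # ~1.39 million km
proxima_distance_m = 4.0e16    # ~4.2 light-years, rounded down

separation_in_diameters = proxima_distance_m / sun_diameter_m
print(f"Sun to nearest star: ~{separation_in_diameters:.1e} sun-diameters")
```

That comes out in the tens of millions of diameters – so the ‘good-looking’ sims are packing stars roughly a million times too close.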
So it occurs to me that the nerdy folks who have a hand in creating those ‘nice looking’ simulations are probably aware of their dirty little crimes. These simulations are not simulations at all, they are but ‘artists’ renditions‘. Now that is an insult of great proportion to any red-blooded computer programmer. All I can say is, you should have formatted that floppy when you realised what you were doing and moved on with your life. It’s too late now – I know your crimes and will not let you sleep easy tonight.
Ok, I have that out of my system. The question is (it should be burning your lips): what does superfast space flight look like then?
To answer this well, you simply need to put more effort into the simulation – you need to consider the great asymmetries in the star distributions – think how small they really are, then think about their clusters, then spiral arms, then galaxies, then clusters of galaxies, then…
I have referenced this video before and I do it again unashamedly – take a look, because they have already done what I suggest…
I think the makers should get a Nobel prize.
Ah, I am not alone in my nerd-dom. Now you can fly around in a pretty darn impressive virtual universe and see for yourself how the stars really actually fly past. Happily, the results are not at all like most starfield simulators. You can fly vast distances with the sky literally ‘unmoved’. It is only once you come near a star or star cluster that those few will move, and only when you are moving stupidly fast yourself (like 2 parsecs per second) in a dense part of a galaxy, will you get anything like the old Windows starfield effect. My inner nerd feels justified. You can run the simulator on your own PC, get it at:
A zero calorie energy drink is a flat-out contradiction.
Think about it. What is a calorie? If you don’t know, look it up. Yes, exactly, it is a measure of… energy content! WTF?
What I want to know is this: how come we let big business redefine our language to their own greedy ends? I mean the people who make low-calorie energy drinks know they have no energy in them, so why are they called energy drinks?
I think it’s because energy is a misunderstood concept and they are taking advantage of this.
Understanding what energy is (and more importantly isn’t) will allow people to decide things correctly – like whether it’s a good idea to try to hike 100 miles across a desert armed only with zero-calorie energy drinks.
So for background, please take a look at my article on energy, designed for people with too little time to read a whole book, or even a pamphlet.
Now, the specific issue here is that people are confusing energy sources with stimulants. Sure, the sugary versions do actually supply some energy, but no more than a can of Coke – but these guys are not charging those absurd prices for sugar – those prices, and claims, are for the drugs. Compounds like caffeine affect our nervous system and interfere with our built-in protection systems, systems that make us feel tired after effort, mechanisms that force us to get the sleep we need in order to rest our muscles and reboot our brains.
The issue here is that the word stimulant is not as easy to sell as ‘energy’, and the English language does allow us to mix up feeling ‘energetic’ with feeling alert and ready for action. The nerdy scientific truth is that tired people actually still have plenty of energy (especially if they are prosperous about the middle) – it is just their inclination to use that energy that changes.
So next time you feel tired but need to keep going, by all means get a ‘so-called’ energy drink but remember it is mainly just a drug. The next time you hit a wall 20 miles into a marathon, remember to get some real energy.
So is messing with your body’s tiredness systems bad? Not necessarily! We must also resist overreacting and committing another crime – resorting to the naturalistic fallacy that messing with nature is fundamentally a bad idea. I quite like it when medical science messes with natural things like smallpox and malaria, for example. Stimulants are not all bad: keeping alert can keep us safe when driving, and used in moderation they can actually help us focus through tedious study or exams.
Question: is it possible time flows in little steps?
At some small scale, could it be, that time is simply a ‘symptom’ of a sequence of events, or states, that there is no actual time passage ‘between’ those states?
This scenario has interesting implications – it suggests life is a bit like a movie – a series of pictures on a strip of celluloid, or pages in a book. And like a book, while the story may unfold to you at whatever speed you read it, the story itself still has its own pace, no matter how fast you read.
This doesn’t mean the book has to be pre-written; it can still unfold with utter unpredictability – the book is unfinished, if you like. The important point is that we are stuck experiencing the passage of time at a rate determined internally – by the rate of chemical reactions in our brains. The drum beat of those reactions would feel the same no matter how fast or slow they seem to an outside observer. They could even be paused for a few minutes – we could not tell!
Now physicists studying energy balances of sub-atomic particles have seen that energy often seems to come in little chunks (the ‘quanta’ in quantum), and that can imply that time may also be chunky (maybe in units of the Planck time?). Alas, time chunking has contradictory implications – contradictory to common sense anyway – like infinite energy flux, not to mention infinite speeds. But hey, if you can just get your head around some of the workarounds physicists have dreamed up (quantum tunnelling for example), everything’s all right again. I am personally highly suspicious of workarounds, and that is what I think they are!
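For scale, the Planck time mentioned above is easily computed from the fundamental constants (standard published values, rounded):

```python
import math

# The Planck time, the natural candidate for a smallest 'chunk' of time:
# t_P = sqrt(hbar * G / c^5)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0        # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(f"Planck time ~ {t_planck:.2e} s")   # ~5.39e-44 s
```

Around 5.4 × 10⁻⁴⁴ seconds – so if time does step, the steps are absurdly far below anything we can clock.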
Anyway, even if you try to get away from quantum weirdness, you get sucked back in – take for example this geometrical example. Consider the relative positions of three point objects (small particles?) moving freely in space: they could, for an instant, line up perfectly, but if your measurement were infinitely accurate, this could only occur for an infinitely small duration so long as the particles are moving. If you try to explain this by saying space is divided up into chunks (like ‘snap to grid’ in MS Powerpoint) you get into geometrical issues that three points cannot always be integer increments apart (nor even rational increments apart) without breaking the most basic number axioms.
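The ‘infinitely small duration’ claim is easy to illustrate: for particles on straight-line paths, collinearity is a determinant that varies as a polynomial in time, so it can only vanish at isolated instants. A sketch with made-up constant velocities:

```python
# Three particles moving in straight lines in 2-D are collinear only at
# isolated instants: the collinearity test is a 2x2 determinant which, as
# a function of t, is a polynomial -- its zeros have zero duration.
def positions(t):
    # hypothetical constant-velocity paths
    p1 = (0.0 + 1.0 * t, 0.0)
    p2 = (1.0, 0.0 + 1.0 * t)
    p3 = (2.0 - 1.0 * t, 2.0)
    return p1, p2, p3

def collinearity(t):
    (x1, y1), (x2, y2), (x3, y3) = positions(t)
    # zero exactly when the three points lie on one line
    return (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)

for t in (0.9, 1.0, 1.1):
    print(f"t = {t}: det = {collinearity(t):+.3f}")
```

For these paths the determinant works out to 2(t − 1)², so the three particles line up at the single instant t = 1 and at no interval around it – exactly the measure-zero alignment described above.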
So even if space isn’t chunked, it turns out you can appeal to the uncertainty principle, which handily says you can only measure the position of anything infinitely accurately if you allow its momentum to be anything at all, including infinite – and infinite momentum is exactly what you (temporarily) need if you are bold enough to let time ‘leap’.
So none of these issues with time chunking turn out as solid proofs against the possibility, they just make things more slippery!
Aside: rather than a book, I like to think of our universe as being a bit like a computer program – I like to think about Pac-man when it plays itself in ‘demo mode’. In demo mode, used to lure people at the arcade, the computer controls both the ghosts and pac-man. In the computer, a sequence of commands is run in the CPU, and the speed of the computer (like the reader of the book) controls the rate at which we ‘see’ the ghost-chase on the screen – but this speed is invisible to pac-man himself: yes, the ghosts chase faster across the screen, but he can run faster too.
Question: Does a time-increment universe allow time travel?
Well, I don’t think we can ‘skip’ events out (we have to experience them all), but if we can go somewhere where events are more or less ‘dense’, maybe we can. We will not feel the difference, we will not get any extra life-span, our cells will age just the same – but if a friend had gone to another place in space-time, where events have bigger gaps, he may have aged at a different rate, and when you meet your friend again one of you will have time-travelled forward and the other backward relative to one another.
Is this really possible? Well, yes, I think so – this model ties in very well with relativistic time travel: if you assume events are more spaced out (less dense, with bigger ‘leaps’ between them) in areas with more mass nearby, or when moving very fast, it maps perfectly.
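That mapping is just the standard pair of time-dilation factors – one for speed (special relativity), one for nearby mass (the Schwarzschild approximation). A sketch with round numbers (Earth’s mass and radius, and the 0.99c traveller from earlier):

```python
import math

c = 299_792_458.0        # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2

def speed_factor(v):
    """Tick rate of a fast-moving clock relative to a stationary observer."""
    return math.sqrt(1 - (v / c) ** 2)

def gravity_factor(mass_kg, r_m):
    """Tick rate of a clock at radius r from a mass, relative to one far
    away (Schwarzschild approximation)."""
    return math.sqrt(1 - 2 * G * mass_kg / (r_m * c**2))

print(f"Clock at 0.99c ticks at {speed_factor(0.99 * c):.4f} of ours")
print(f"Clock on Earth's surface: {gravity_factor(5.972e24, 6.371e6):.12f}")
```

In the ‘event density’ picture, both factors are just how much sparser the events get: dramatically so at 0.99c, and by less than a part per billion down here in Earth’s modest gravity well.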
That’s it for now! Of course, maybe time does not leap, I don’t know, but it’s something I love to think about! Please let me know your thoughts…