
Look up into the night sky and you’ll see thousands, if not millions (and with a good telescope, billions) of stars. Now as it turns out, you may also be seeing millions of planets!

This is the conclusion of a study from the California Institute of Technology (Caltech) that adds yet more evidence to the idea that planetary systems are the galactic norm, rather than a lucky fluke. The team at Caltech made their calculations after observing the planets orbiting the star Kepler-32, a star system they believe is representative of most of the stars in our galaxy, the Milky Way.

“There’s at least 100 billion planets in the galaxy—just our galaxy,” says John Johnson, assistant professor of planetary astronomy at Caltech and co-author of the study.

“It’s a staggering number, if you think about it,” adds Jonathan Swift, a postdoctoral researcher at Caltech and lead author of the paper. “Basically there’s one of these planets per star.”

The star system serving as the astronomers' template, Kepler-32, has a series of five planets (two of which have so far been confirmed by other institutes and independent astronomers) orbiting an M-type star (a red dwarf) with about half the brightness of our own star, the Sun. This type of star accounts for nearly three quarters of the stars in the Milky Way. The five planets, which are similar in size to Earth and orbit close to their star, are also typical of the class of planets that the Kepler telescope has so far discovered orbiting other M-type dwarfs. This has led Caltech to the conclusion that most of the stars in our galaxy have at least one orbiting planet, if not many more.

So while this system may turn out to be very common, one thing makes it very special for astronomers and physicists the world over: the orbits of the planets lie in a plane positioned such that Kepler views the system edge-on. Thanks to this rare orientation, each planet blocks some of Kepler-32's starlight as it passes between the star and the Kepler telescope, giving scientists the chance to measure the planets' characteristics, such as their sizes and orbital periods. This orientation provides an opportunity to study the system in great detail—and because its planets are representative of the vast majority of planets thought to populate the galaxy, the team says, the system can also help astronomers better understand planet formation in general.

One of the fundamental questions regarding the origin of planets is how many of them there are. Like the Caltech group, other teams of astronomers have estimated that there is roughly one planet per star, but this is the first time researchers have made such an estimate by studying M-dwarf systems, the most numerous population of star systems known.

To do that calculation, the Caltech team determined the probability that an M-dwarf system would show Kepler-32's edge-on orientation. Combining that probability with the number of planetary systems Kepler is able to detect, the astronomers calculated that there is, on average, one planet for every one of the approximately 100 billion stars in the galaxy. But their analysis only considers planets in close orbits around M dwarfs—not the outer planets of an M-dwarf system, whose rare transits Kepler struggles to 'see', or planets orbiting other kinds of stars, like our own G-type star. As a result, they say, their estimate is conservative. In fact, says Swift, a more accurate estimate that includes data from other analyses could lead to an average of two planets per star.
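For the curious, here is a rough sketch in Python of the geometric idea behind estimates like this. The numbers below are my own illustrative choices (a Kepler-32-like star of about 0.53 solar radii, planets at about 0.05 AU, and a made-up detection count), not the team's actual figures or pipeline:

```python
# A rough sketch of the geometry behind such estimates. These are my own
# illustrative numbers (a Kepler-32-like star and a made-up detection
# count), not the Caltech team's actual figures or pipeline.

R_SUN_M = 6.957e8        # solar radius, metres
AU_M = 1.496e11          # astronomical unit, metres

r_star = 0.53 * R_SUN_M  # assumed radius of an M dwarf like Kepler-32
a_orbit = 0.05 * AU_M    # assumed close-in orbital distance

# For a randomly oriented system, the chance that a planet's orbit lines
# up so that it transits its star is roughly R_star / a.
p_transit = r_star / a_orbit
print(f"Transit probability: {p_transit:.3f}")  # ~0.05, i.e. about 1 in 20

# So every transiting system Kepler sees stands in for ~1/p similar
# systems whose orbits simply don't line up with our line of sight.
n_detected = 1000                    # hypothetical detection count
n_implied = n_detected / p_transit
print(f"Implied similar systems: {n_implied:,.0f}")  # ~20,000
```

In other words, a modest number of detections, divided by a small transit probability, implies a very large underlying population.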

This huge prediction of planet numbers comes in the wake of planets being discovered left, right and centre by the Kepler space telescope and other observatories, and their numbers are only likely to increase. And who knows, maybe they'll find a planet close enough for us to visit someday?

Alex Davis

References:

http://www.upi.com/Science_News/2013/01/03/Milky-Way-may-have-100-billion-planets/UPI-58681357251056/

http://www.examiner.com/article/study-milky-way-home-to-100-billion-planets?cid=rss

When we're little (and in some strange cases, into adulthood), the story of Father Christmas, the fat old man adorned in red and white robes, pervades our lives, (hopefully) making us think about our actions under the threat of being branded a "naughty child" and getting coal as a present instead of that new PlayStation game you really wanted. However, there comes a time in every child's life when they learn the truth of Christmas: that an old man doesn't break into your house leaving gifts, but that your parents quietly hide presents in the loft until you've gone to sleep on Christmas Eve.

And for those of you that still believe, sorry, but the truth hurts.

But, as a bit of an annoying child, one thing always puzzled me: if this legendary man DID exist, how would he get around the world, and all its good children, in only one night? Would it even be possible?

Well, let's start with the children. There are roughly 2 billion under-15s on Earth at any one time (let's assume this is the age you stop believing in Father Christmas and start buying people gifts instead, you cheapskate). However, since St Nick doesn't visit children of Muslim, Hindu, Jewish or Buddhist families (except maybe in Japan), this reduces the workload for Christmas night to about 33% of the total, around 660 million children, and with a global average fertility rate of around 2.5 children per woman (and therefore household) this amounts to about 250 million households, assuming there is at least one good child in each.

Now, Father Christmas has circa 31 hours (once we account for the rotation of the Earth and differing time zones) to make his round trip of the world and its homes, which works out at 2,240 visits per second. That is to say, St Nick has around 1/2,500th of a second to park up on your roof, break into your house, fill your stockings, place your presents, eat any food left for him, get out again and reach the next house.

Assuming these 250 million homes are evenly distributed around the world (which, of course, they wouldn't be), we're talking about 0.23 square miles per household, roughly half a mile between stops, and a minimum trip length of 131.1 million miles, without diversions around storms, aeroplanes or mountains.

This means our dear old Father Christmas has to be travelling at a speed of around 1,175 miles per second (4,226,000 miles per hour), about 5,500 times the speed of sound. For comparison, the fastest man-made objects ever, the Helios space probes, orbit the Sun with an average speed of 44 miles per second, while your run-of-the-mill reindeer can run at about 0.00416 miles per second (15 miles per hour).
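If you want to check the arithmetic yourself, here is the whole calculation in a few lines of Python, using the same assumed figures as above:

```python
# A back-of-the-envelope check of the figures above, using the same
# assumptions: 250 million households, 31 hours, a 131.1-million-mile route.

households = 250e6
hours = 31

visits_per_second = households / (hours * 3600)
print(f"{visits_per_second:,.0f} visits per second")      # ~2,240

trip_miles = 131.1e6
speed_mi_per_s = trip_miles / (hours * 3600)
print(f"{speed_mi_per_s:,.0f} miles per second")          # ~1,175
print(f"{speed_mi_per_s * 3600:,.0f} miles per hour")     # ~4.23 million

# The speed of sound is about 0.213 miles per second:
print(f"~{speed_mi_per_s / 0.213:,.0f}x the speed of sound")  # ~5,500x
```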

The payload of the sleigh adds another interesting element. Assuming each child gets nothing more than a medium-sized LEGO set (two kilograms), the sleigh is carrying over 500 thousand tonnes, not counting Father Christmas himself. On land, a conventional reindeer can pull around 150 kilograms. Even granting that flying reindeer can pull 100 times this, St Nick would need far more than 8 or 9 of them, he would need 33,000. This increases the payload even further, adding another 5,000 tonnes to the sleigh, making it similar in weight to the Seawise Giant, the longest ship ever built and, by many standards, the largest man-made, self-propelled object ever.

500,000 tonnes moving at 1,175 miles per second is going to produce a lot of air resistance; it would be equivalent to a spacecraft re-entering the Earth's atmosphere 168 times faster than it's supposed to. As a result, the reindeer would almost instantly vaporise into a superheated cloud of atoms and molecules.

Not that it matters much, since St Nick, as a result of accelerating from a dead stop to 1,175 miles per second in 0.0004 seconds, would be subjected to accelerations of nearly 500 million g. A 115 kilogram Father Christmas (which seems ludicrously slim) would be pinned to the back of the sleigh by over 500 billion newtons of force, instantly crushing his bones and organs and reducing him to a quivering blob of pink goo.
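Again, for those who want to verify the grisly details (same assumed figures: 0 to 1,175 miles per second in 1/2,500th of a second, a 115 kg Santa):

```python
# What those forces do to a 115 kg Father Christmas.

MILE_M = 1609.34        # metres per mile

dv = 1175 * MILE_M      # change in speed, m/s (~1.89e6)
t = 1 / 2500            # time available per stop, s
g = 9.81                # standard gravity, m/s^2
mass = 115              # St Nick's (ludicrously slim) mass, kg

accel = dv / t
print(f"Acceleration: {accel:.2e} m/s^2 (~{accel / g:,.0f} g)")  # ~4.8e8 g

force = mass * accel
print(f"Force: {force:.2e} N")  # ~5.4e11 N, i.e. over 500 billion newtons
```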

Therefore, if Father Christmas did exist, he doesn't now.

Hope you had a good Christmas, and happy new year!

Alex Davis

References:

http://www.indexmundi.com/world/demographics_profile.html

http://worldchristiandatabase.org/wcd/

http://www.un.org/esa/population/

New Eyes on the Sun: A Guide to Satellite Images and Amateur Observation by John Wilkinson

http://www.tribuneindia.com/1999/99jul11/sunday/head3.htm

Autotrophic Humans

(I’m going to have a highly controversial foray into the world of biology today, so do bear with me)

Could autotrophic humans bring an end to the need for chocolate cake, bacon and pasta? Or could there be something better in it for everyone?

In the world of biology, the huge range and diversity of biota (living things) can be broadly broken down into two main groups: autotrophs and heterotrophs. Autotrophs, like plants and many bacteria, are organisms that can produce complex organic compounds, such as carbohydrates, fats and proteins, from simple substances in their surroundings, generally using energy from light (by photosynthesis) together with carbon dioxide and water from the air and soil. Heterotrophs, like you and me, on the other hand, can't make these complex molecules, and so need to ingest them, most usually by eating many delicious foods. But in today's world, where space is at a premium and food prices are sky-rocketing, surely there are better ways of gaining energy from the world than eating its flora and fauna? What if, like plants, we had evolved differently and could create our own energy and proteins using chloroplasts in our skin? For one, we would most probably all turn green from the amount of chlorophyll, but what about the practical side of it? To start, could we actually survive on nothing but sunlight, carbon dioxide and water? And what effects would this have on our environment as land use and populations move and change?

Well, first, let's have a look at the maths. (I'm going to make a point here, as many people have brought it up with me: ALL of these numbers are optimal numbers. They don't take into account bad weather, questionable dress sense or how well aligned with the sun you are; they are simply the best and most favourable estimates for this strictly hypothetical experiment.) First things first, let's have a look at our subject: an average adult British male has a usable surface area of about 1.88 m². This is actually not a lot, but we'll come back to that later. Now, the average solar power absorbed by the entire Earth's surface is roughly equivalent to 164 petawatts, so the amount absorbed by our average male of 1.88 m² is in the order of 564 watts (about half a toaster's worth of power). So over the course of a day with six hours of strong, good-quality sunlight, our subject would absorb nearly 12,000,000 joules of energy. This sounds like a lot, but remember, all this solar energy has to go through the chloroplasts before becoming usable energy. Chloroplasts are deceptively inefficient, turning only around 6% of the absorbed light energy into actual, usable energy. For plants and bacteria this is more than enough, but for humans it means only 720,000 joules of that glorious sunshine becomes usable energy. This falls far short of our body's recommended energy intake of 10.5 million joules; in fact, it's just under 7% of what we need.
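Here is that energy budget as a quick Python sanity check. All figures are the optimal estimates above; the 300 W/m² flux is simply the value implied by 564 W spread over 1.88 m²:

```python
# The photosynthetic-human energy budget, using the optimal figures above.

surface_area = 1.88          # usable skin area of an average adult male, m^2
solar_flux = 300             # absorbed solar power implied above, W/m^2
hours_of_sun = 6             # hours of strong sunlight per day
efficiency = 0.06            # fraction of light chloroplasts make usable
daily_requirement = 10.5e6   # recommended daily energy intake, J

absorbed_power = surface_area * solar_flux               # ~564 W
absorbed_energy = absorbed_power * hours_of_sun * 3600   # ~12.2 MJ/day
usable_energy = absorbed_energy * efficiency             # ~0.73 MJ/day

print(f"Usable energy: {usable_energy / 1e6:.2f} MJ/day")
print(f"Fraction of daily needs: {usable_energy / daily_requirement:.1%}")  # ~7%
```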

So that's a no, then, to running on sunlight. The problem is that humans just aren't built for it: we have a lot of mass (and therefore a lot of things that need energy) and not much surface area to go with it. For plants it's the opposite: they have relatively little mass, yet a very large surface area thanks to all their leafy bits.

But it's not all doom and gloom. Although 7% doesn't sound like much, it all adds up. Just think: 13,800,000 km² of the Earth's surface is taken up by crops and farmland, and 7% of this is nearly 1 million square kilometres, an area about the size of Egypt! This land could be put to use as housing to ease the world's crippling overpopulation and lack of space, or even as extra public parks and spaces.

But what if we kept this land as farmland? What if we put it to better use? 1,000,000 km² of farmland equates to a lot of food. Take wheat, the world's staple food and one of the most important grains in production: in 2010, world wheat production was 651 megatonnes. If 7% of this, 45.5 megatonnes, were going spare, we would have enough grain to feed nearly half a billion people (assuming average consumption of 100 kg per annum). But then again, there are around 1 billion malnourished people in the world, so would this 45.5 megatonnes of wheat stretch that far? Let's look at one of the human body's most essential micronutrients: iron. Iron is needed to allow the oxygen we breathe to bond to the haemoglobin in our blood, without which we simply wouldn't be able to live. Iron deficiency is defined as getting less than 55% of our recommended daily intake of iron. So if we assume that less than 55% of any foodstuff equates to malnourishment, or a food deficiency, our half-billion people's worth of food could feed every malnourished child, adult and homeless person in the world (within the realms of statistical error).
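And the farmland arithmetic, step by step, for completeness (same figures as above):

```python
# The farmland-to-food arithmetic above, step by step.

farmland_km2 = 13.8e6     # global cropland, km^2
saved_fraction = 0.07     # the ~7% an autotrophic diet would free up
wheat_2010_mt = 651       # world wheat production in 2010, megatonnes
consumption_kg = 100      # average wheat consumption per person per year, kg

freed_land = farmland_km2 * saved_fraction
print(f"Freed land: {freed_land / 1e6:.2f} million km^2")  # ~0.97M km^2 (Egypt-sized)

spare_wheat_kg = wheat_2010_mt * saved_fraction * 1e9      # megatonnes -> kg
people_fed = spare_wheat_kg / consumption_kg
print(f"People fed: {people_fed / 1e6:,.0f} million")      # ~455 million
```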

To conclude, the idea of individuals running off sunlight and chloroplasts may not be viable, but if we add up the small positives, they make a very large difference.

Alex Davis

Sources:

http://www.who.int/en/

http://www.fao.org/

http://en.wikipedia.org/wiki/Chloroplast

http://en.wikipedia.org/wiki/Solar_energy

A calculator

We’re On Google!!

Ahh, the fame

That's right folks, now you can search for us on Google (and probably other search engines too)!
For best results try searching “Science Joy Life” and we’ll be on the first page of results, woohoo!

Immortal:

adjective

1. not mortal; not liable or subject to death; undying.

2. remembered or celebrated through all time.

3. not liable to perish or decay; imperishable; everlasting.

Immortality: the legendary state that humans have been striving after for a very long time. But it is impossible to achieve; nothing organic can live forever. Or can it?

Meet Turritopsis nutricula. This little hydrozoan has achieved what no other multicellular organism (that we know of) has ever achieved before: it is the immortal jellyfish. Well, to be precise, the biologically immortal jellyfish. This means that, theoretically, one of these little creatures could quite happily live for an indefinite period of time, except that most of them are likely to succumb to predation or disease, especially in the plankton stage.

The immortal jellyfish, in the flesh


So how does the immortal stinger do it? Well let’s start from the beginning:

The male and female jellyfish release gametes (sex cells) and the eggs become planula larvae that seek out a surface to rest on before becoming a polyp (which is the first form jellyfish take). These hydrozoan polyps are called hydroids. They make up a hydroid colony, with polyps all connected to each other by a tube known as a stolon.

The hydroid colony then buds and releases tiny jellyfish (scientifically known as medusoids) that are only a few millimetres across. The tiny jellyfish feed on plankton and grow to a maximum size of about 4.5 millimetres (0.18 in) after 2 to 4 weeks, at which point they take their second form and are known as 'medusae'. They are now sexually mature and can reproduce in the usual way, but if conditions get a little dire, such as starvation, changes in temperature or drops in salinity, they switch up their style and do something amazing.

The hydroid colony


An adult will actually revert back into a polyp, absorbing its tentacles and jellyfish bell as it reattaches itself to the ground. It then extends its stolons and begins making a whole new hydroid colony. What's even better is that it can perform this cool trick at any time during its development.

The immortal jellyfish does this by going through a process known as cell ‘transdifferentiation’. Cell transdifferentiation is when an already differentiated cell is altered and transformed into a completely new cell. This is one organism that the Grim Reaper doesn’t have an easy time dealing with!

It’s a stinging sensation!


Now imagine if humans could harness that potential. We could use the process to heal or replace damaged tissue without any adverse effects. Immortality may not be as far out of our reach as we once thought.

Sources:

http://www.realmonstrosities.com/2012/01/immortal-jellyfish.html

http://en.wikipedia.org/wiki/Turritopsis_nutricula

By Myles Scott – The Demotivator

What is Doppler shift, how does it affect the world around us, and why using it as an excuse for running that red light may not be such a great idea

If you sit out beside a not-too-busy road on a reasonably quiet day, you may notice that as cars drive past, the noise they produce seems to differ in pitch depending on whether they're travelling towards you or away from you. When approaching, a car seems to emit a higher-pitched sound, and as it travels away, a lower-pitched one. This strange phenomenon is known as Doppler shift, or the Doppler Effect. It's caused by waves becoming squashed up in front of a moving object, as the emitted waves struggle to pull away, and becoming spaced out behind it, as the waves struggle to keep up, in both cases changing the pitch of the sound, or frequency of the light, as the wavelengths become shorter or longer.

The basic equation of low-speed Doppler shift is:

f = f0 × (c + vr) / (c + vs)

where c is the speed of the wave, vr is the speed of the observer (positive when moving towards the source), vs is the speed of the emitter (positive when moving away from the observer) and f0 is the frequency of the emitted wave.
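To see the formula in action, here is a small Python example with toy numbers of my own (a 440 Hz horn on a car doing 30 m/s, not figures from the post's sources):

```python
# A minimal sketch of the low-speed Doppler formula: the pitch of a 440 Hz
# car horn heard from the roadside as the car approaches and then recedes.

def doppler(f0, c, v_source):
    """Observed frequency for a stationary observer; v_source > 0 means
    the source is moving towards the observer."""
    return f0 * c / (c - v_source)

c_sound = 343.0   # speed of sound in air, m/s
f0 = 440.0        # emitted frequency, Hz
v = 30.0          # car speed (~67 mph), m/s

print(f"Approaching: {doppler(f0, c_sound, +v):.1f} Hz")  # ~482 Hz, higher pitch
print(f"Receding:    {doppler(f0, c_sound, -v):.1f} Hz")  # ~405 Hz, lower pitch
```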

This idea was first proposed by the Austrian physicist Christian Doppler in 1842. His paper "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels" ("On the coloured light of the binary stars and some other stars of the heavens") details the effects of celestial movement on the frequency of light received from observable stars; basically, how the movement of stars relative to Earth affects the colour of the observed light. This technique of observing the difference between the expected and actual frequency of light emitted by stars and galaxies has very useful applications: it allows astronomers and cosmologists to determine the rate of expansion of the universe by measuring the "red shift" of galaxies, that is, how much their observed light has been shifted towards the red end of the spectrum by their movement away from us.

Now this is all well and good, but what does this mean for us, average Joe, not Johnny the astronomer, and how can we use it to our advantage? Well, the thought occurred to me recently: how fast would you have to be going to see a red traffic light as green? It's an interesting thought. Imagine you've just been pulled over for running a red light on a busy box junction. Nobody was hurt, but the police still saw you. How fast would you have to be travelling for the light to have appeared green to you, and thus to get away with the minor traffic offence on a technicality? Well, we can't use the equation we saw earlier, as it has two inherent problems. Firstly, it only works for slow speeds, and as is plainly obvious, we're going to have to be travelling at some speed before things start changing colour. Secondly, it only works for waves that travel through a medium, like sound, unlike light, which can travel through a vacuum. So we have to use another equation, the relativistic Doppler effect equation, which takes the effects of special relativity into account. For a source you are approaching, the observed frequency is:

f = f0 × √((1 + v/c) / (1 − v/c))

And with some fancy rearranging, this becomes:

v = c × ((f/f0)² − 1) / ((f/f0)² + 1)

Now add in some numbers: take red light at roughly 700 nm and green at roughly 500 nm, so f/f0 = 700/500 = 1.4 (frequency is inversely proportional to wavelength), giving v = c × (1.96 − 1)/(1.96 + 1) ≈ 0.32c.

And we come out with a speed of around a third of the speed of light, which is speeding by anyone's standards. So, you may get away with one traffic offence on a technicality, but there wouldn't be much hope of getting off on this one.
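Here is that calculation in Python, assuming a red light at 700 nm blueshifting to green at 500 nm (representative wavelengths; pick slightly different ones and the answer shifts a little):

```python
# Reproducing the traffic-light number with the rearranged relativistic
# Doppler formula above.

c = 299_792_458         # speed of light, m/s

lambda_red = 700e-9     # wavelength of the red light, m (assumed)
lambda_green = 500e-9   # wavelength you'd need to see, m (assumed)

ratio = lambda_red / lambda_green       # = f / f0
beta = (ratio**2 - 1) / (ratio**2 + 1)

print(f"v = {beta:.3f} c = {beta * c / 1000:,.0f} km/s")  # ~0.32c, ~97,000 km/s
```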

N.B. See https://journalclubscienceblog.wordpress.com/2012/07/17/what-would-happen-if-we-travelled-near-the-speed-of-light/ for another reason why this isn't a good idea.

By Alex Davis

Sources-

http://en.wikipedia.org/wiki/Relativistic_Doppler_effect

http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/reldop2.html

http://en.wikipedia.org/wiki/Frequency

Advancing Physics AS text book

A calculator

A Game of Swords

After reading far too much of the excellent A Song of Ice and Fire series, I decided to look a little deeper into the knight's best friend: a sword! I will not only be looking at the techniques used to create some of history's most notorious weapons, but I will be exploring the physics behind them, from molecular structures to forces and pressure. This is A Game of Swords!

Anyone who's anyone (when it comes to weaponry) knows that the most important part of any edged weapon is the quality and design of the blade. There's no point slashing at your opponents with blunted edges, and you'll never pierce anything with an inferior tip, so how can we ensure that our sword starts sharp and stays sharp? The first thing to look at is the material you are using. Some of the most primitive swords, used by the Mayans and Aztecs, were little more than wooden clubs with obsidian chips laid along each edge. Obsidian, made up of silicon dioxide with mixed oxides of magnesium and iron, is a volcanic glass rather than a metal, and it is this glassy quality that makes it great for sword making: it is extremely brittle, and will fracture into very sharp pieces. This is all very well if you happen to live near a long-dead volcano, but for the tribes-people of Europe there had to be another way to forge a weapon. The ancient Greeks relied heavily on bronze weaponry, as this alloy of copper and tin was strong, sharp and easy to make; due to the metals used, it was very easy to cast and forge into weaponry. Even late into the Iron Age, Roman officers carried finely decorated bronze swords into battle. The eponymous Roman sword is the gladius, a very simple double-edged blade with a (relatively long) sharpened point. These swords were primarily designed for underarm stabbing, as in the heat of battle there is very rarely enough space to swing anything larger than a shortsword! The gladius and its cavalry equivalent, the spatha, dominated the battlefield for centuries, giving the Romans the flexibility they needed: as it required only one hand, the famous rectangular Roman shield (or its rounded sister for mounted combat) could be held in the other, offering ample protection for infantry and cavalry alike.

The blades of common soldiers were at first cast from iron, and these early casting methods created rather brittle weapons that were prone to breaking. Iron was, however, much more abundant than copper and tin, and smiths soon started pioneering new techniques to create stronger blades. In East Asia, blades were often forged from special tamahagane steel, made from different mixtures of iron sand, which creates an incredibly strong mixture of alloys, perfect for each individual part of the blade. This steel was then folded upon itself repeatedly, creating an edge sharp enough to split a bullet in two (http://www.youtube.com/watch?v=OBFlYwluqMk – skip to about 45-60 seconds to see the slo-mo footage). Steel is so strong because of its crystalline structure, created when iron is alloyed with a small amount of carbon. Iron fresh from the ore contains a lot of carbon atoms; when the metal is cast it loses some of them, but the more that remain, the harder and more brittle the steel becomes. By controlling the amount of oxygen that flows across the molten steel (which burns off carbon), the hardness and potential sharpness can be controlled, allowing smiths to tailor-make their raw forging material. If the steel is more malleable, it can be forged into a stronger weapon with more interesting curves, but may blunt a lot quicker. In this way a sword can be made from composites of flexible and inflexible steels, with sharp, brittle edges and a flexible body. This is the point at which sword making reaches its zenith.

But now we have our alloys, how do we decide what sort of sword we want? Should our sword be held in one hand, or two? A light sword is good, but would a heavier sword deal a more devastating blow? The answers to these questions are largely situational, but there may be a physical reason to choose one weapon over another. It all comes down to how much pressure you can apply, and how much pressure your opponent can resist. Pressure is simply the force applied divided by the area it is applied to, so greater force equals greater pressure, right? But the force in a sword swing comes mainly from the momentum (and therefore the weight) of the sword. So if we want a greater force we'll need a bigger sword, but a bigger sword means you'll need to be stronger to actually do anything with it. This is all very well if you're the knight with the rippling muscles, but what if you're the poor gangly footsoldier? In that case, would it not be easier to reduce the area that the force is applied to? Especially if your opponent is wearing plate armour and heavy chain-mail, you'll need something that has a chance of piercing through all those layers (and hopefully your opponent). This is where thrusting swords, such as the rapier, come into play. These allow a great deal of pressure to be applied by stabbing forward with the tip of the sword. The smaller and sharper the tip, the greater piercing power your sword has, and the more likely your enemy is to get a bellyful of steel! Most of these swords still had two sharpened edges, just in case, but occasionally a soldier would be so confident of his thrust that his sword would have no edge at all!

Let us suppose we have our stocky knight in his heavy plate armour, with a big, heavy broadsword. On the other side of the field we have the gangly footman, épée in hand, dressed in some cheap chain-mail and an ill-fitting helmet. The knight is a sure bet, right? Wrong! Let us say that the knight's armour can withstand a direct hit of 2,000 pascals of pressure on his breastplate. Now, the footman's épée has a finely crafted tip, 0.5 millimetres across (an area of roughly 2 × 10⁻⁷ square metres), and his sword weighs about a kilogram. Our footman, quick as an arrow, lunges at our knight with an acceleration of 10 metres per second per second. Newton's second law states that F = ma, so our footman hits the knight with a force of 10 newtons. That might not sound like a lot, and in everyday terms it isn't, but when we feed this value into our pressure equation (remembering that pressure is force divided by area, in square metres), we get a value of around 50 million pascals! Tens of thousands of times more than the knight's armour can withstand! Needless to say, the footman would need to give his sword a bit of a clean before he sheathes it again. The outcome might have been different if the knight hadn't been encumbered with such a heavy broadsword; indeed, when using a heavy weapon it is always best to be accurate, and better to be well prepared!
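Here is the duel in Python, with the tip correctly treated as an area rather than a length (same assumed figures otherwise):

```python
# The knight vs. footman duel in numbers. A 0.5 mm-wide point has an
# area of ~2e-7 m^2; dividing force by a length would give the wrong units.

import math

mass = 1.0                   # epee mass, kg
accel = 10.0                 # lunge acceleration, m/s^2
force = mass * accel         # Newton's second law: F = ma -> 10 N

tip_diameter = 0.5e-3        # 0.5 mm, in metres
tip_area = math.pi * (tip_diameter / 2) ** 2   # ~1.96e-7 m^2

pressure = force / tip_area
print(f"Pressure at the tip: {pressure:.2e} Pa")        # ~5.1e7 Pa
print(f"vs armour rating:    {pressure / 2000:,.0f}x")  # ~25,000 times over
```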

Thus we have seen how versatile the humble sword can be. Ever the choice of officers and laymen alike, the blade served us well for thousands of years. We can see that swords, as well as strategies, can be adapted to suit any situation, and we now know that as long as your blacksmith is good enough, you'll never go unarmed or unprepared!

Harry Saban – The Octave Doctor (PhD pending)

Sources:

Wikipedia (We’ve all done it, so don’t judge me!) – History of Swordmaking and Steelmaking.

http://www.ehow.co.uk/about_6638014_atomic-structure-steel.html – Atomic Structure of Steel

Armstrong

On Saturday 25th August 2012, one of the greatest explorers of modern history tragically passed away: Neil Armstrong.

Known worldwide as the first person to set foot upon an alien world, he became famous only after his historic moonwalk, and little general knowledge exists about his early, pre-Apollo life. It was a fame he hated and publicly shied away from, and he became a recluse in his later years. Before all of this, however, he was an accomplished Boy Scout, a US Navy pilot, a civilian test pilot and, for a short period, a university professor.

Before becoming an astronaut, Armstrong was a United States Navy officer and served in the Korean War aboard the USS Essex as an armed reconnaissance pilot. On one sortie his plane was severely damaged by enemy ground fire, shearing three feet off its right wing; against all the odds, Armstrong managed to limp home in his damaged craft and eject over friendly territory. After the war Armstrong graduated from Purdue University with a BSc and completed graduate studies at the University of Southern California, gaining an MSc in aerospace engineering. He then served as a test pilot at the National Advisory Committee for Aeronautics High-Speed Flight Station, based at Edwards Air Force Base. Here Armstrong flew several famous craft, including the Bell X-1B and the North American X-15, showing massive potential both as an engineer and as a pilot.

Armstrong's first step towards becoming an astronaut came when he was selected for the US Air Force's Man In Space Soonest programme, a very imaginatively named enterprise to place a man in space before those pesky Russians. In November 1960, Armstrong was chosen as part of the pilot consultant group for the Boeing X-20 Dyna-Soar, a military space plane, and on March 15, 1962, he was named as one of six pilot-engineers who would fly the space plane once it got off the design board.

In the months after the announcement that applications were being sought for a second group of NASA astronauts, Armstrong became more and more excited about the prospects of both the Apollo programme and of investigating a new aeronautical environment. His astronaut application arrived about a week past the June 1, 1962 deadline. Luckily Dick Day, with whom Armstrong had worked closely at Edwards Air Force Base, saw the late arrival of the application and slipped it into the pile before anyone noticed.

On September 13th 1962, Armstrong got the call asking him if he wished to join NASA's astronaut corps as part of what became known as the 'New Nine'. He jumped at the opportunity, and the rest, as they say, is history. Neil Armstrong went on to become one of the most famous NASA astronauts in history, becoming the world's first civilian astronaut, performing the world's first manned docking of two piloted spacecraft and, of course, being the first man to walk upon the Moon.


By Alex Davis

Sources:

http://www.nasa.gov/missions/research/neil_armstrong.html

http://en.wikipedia.org/wiki/Neil_Armstrong

http://www.guardian.co.uk/science/2009/jul/09/apollo-astronauts-walking-moon

The Higgs boson is named after an Edinburgh physicist, Peter Higgs, and is widely thought to be central to the answer to the question of how matter gained its mass. For those of us in agreement with the popular theory of the Big Bang as the origin of the universe, it is thought that shortly after the beginning of the cosmos, mass was nonexistent. (Indeed, so were atoms and elements, but that is a different story.) A field known as the Higgs field is thought to be the reason for the existence of mass – particles interacting with this field gain mass. This came about as the temperature of the universe dropped below a certain threshold value, and the amount of mass a particle gains is directly proportional to the strength of its interaction with the field.

In the 1970s, it came to the attention of the global physics community that two of the fundamental forces were actually very closely related. The thing is, the proposed explanation for a unified force, namely the electroweak force, required that its force-carrying particles carry no mass. Unfortunately, we now know this to be untrue of the weak force's carriers, and so Higgs and his colleagues set about finding a solution.

Ian Sample has conveniently created an analogy using, rather imaginatively, ping-pong balls, food trays and brown sugar. Basically, as the universe formed, a field known as the Higgs field came into existence. Many particles interacted with this field, and the more they interacted, the heavier they became, until they no longer moved at the speed of light. The Higgs boson is analogous to a grain of sugar: the constituent particle of the Higgs field. As you may know, just as light has the dual properties of wave and particle, so may the boson. The question scientists endeavour to answer is how a particle whose properties are yet to be ascertained could be the reason behind the existence of mass. Sample conjectures that among the products of a collision between protons travelling at 0.999999c, a Higgs boson would be one. The problem is that, unlike most other particles, such as photons, quarks and electrons, the Higgs boson decays very quickly, and as such is very difficult to observe. The Standard Model of particle physics, developed in the 1970s and still used today, has met unprecedented success over the past decades, with aspect after aspect of it proven to exist (gravity aside). However, the fundamental missing piece was proof of the existence of the Higgs boson.

The 4 fundamental forces of nature

Why is it called a boson? Well, earlier I mentioned the electroweak force, which unifies electromagnetism and the weak nuclear force. These sit among the four fundamental forces of nature – gravity, electromagnetism, and the strong and weak nuclear forces – of which the Standard Model describes all but gravity. Scientists propose that each of these forces has a carrier particle, generically named a boson, which interacts with matter. For example, the electromagnetic force has the photon as its boson, which carries the electromagnetic force with it and transfers it to matter. Bosons are believed to be able to snap in and out of existence in an instant and also to be 'entangled' with other bosons around them. You see, 'boson' is not only the name of a fundamental force carrier; it is a term used for force carriers of various natures and designations. As a conveniently relevant example, I can explain the significance of the photon and the bosons called the W and Z particles.

Going back to one of the constituents of the electroweak force, the weak force: its force-carrying particles, as the photon is for the electromagnetic force, are two bosons named W and Z, discovered in the 1980s. Unlike the photon, these do have mass. A possible analogy for this next part is to think of these force-carrying bosons as balls that are thrown between particles as they exchange the weak force. Heavier balls have a shorter throwing range, and similarly, so do heavier force carriers – this much was known. But what gives these force carriers mass, which so affects their behaviour? The Higgs theory. This is what scientists think could account for this fundamental difference between photons and the W and Z bosons and, by extrapolation, for all other force carriers with nonzero masses. It has been estimated that 96% of the universe is invisible, made up of dark matter and physics that has not yet come within our grasp. The Standard Model only accounts for the 4% of the universe we know so well, hence we know it can never be a complete, unifying theory. For this, we need to be able to build on the concept, much like Einstein built on his theory of special relativity in order to account for gravity – which, incidentally, we can still conclusively explain very little of. So in reality, the quest for the single Higgs particle (in this specific context) includes determining whether it is a Standard or a non-Standard Higgs particle.

The Standard Model Higgs particle would, if confirmed, be only one of numerous possible types of Higgs particle. To gain an insight into the sheer complexity of the process and the need for excruciatingly thorough and systematic procedural protocols, consider the following. One of the ways a Higgs particle can decay, as it will within an instant of being produced, is by emitting two photons, which can be detected. However, there are many other two-photon events that occur, which by themselves have had countless statistical analyses carried out on them in order to determine various values, one simple example being the percentage of decay events leading to two photons being produced. Not only this, any discovery can only be (tentatively) validated if the probability that the signal is a statistical fluke is less than roughly one in a million.
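To put that threshold in context, here is a small Python sketch (my own illustration, not from the sources) of the 'sigma' convention particle physicists use for such probabilities. Note it needs SciPy:

```python
# Particle physics declares a "discovery" at five sigma: the chance that
# the signal is a pure statistical fluke is below roughly 3 in 10 million.

from scipy.stats import norm

for sigma in (3, 5):
    # One-sided probability of a fluctuation at least this large
    p = norm.sf(sigma)
    print(f"{sigma} sigma -> p = {p:.2e} (about 1 in {1 / p:,.0f})")

# 3 sigma -> ~1 in 740 ("evidence"); 5 sigma -> ~1 in 3.5 million ("discovery")
```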

I hope that my chaotic and quick run through the world of Higgs has, if not provoked interest, at least informed you in an understandable manner. For those who are interested, the analysis of the Higgs boson's (possible) discovery is scheduled to be completed by the end of 2012, by which time we should hope to know not only whether the Higgs particle that has been discovered is Standard or non-Standard, but whether it exists at all. Despite the disappointment that would no doubt follow a disproof of the Higgs boson's existence, each of the three possible outcomes will lead to progress: either building on current proven theories, adding flesh to the bones of hypothetical theories, or starting afresh in order to encourage the development of entirely new concepts altogether, which may or may not prove more effective than building on our current ones.

Sources:

Bharat

Made for my Nuffield Bursary work placement.

Sorry for not posting for the last couple of weeks, but things got rather hectic at work. The good news is we've got some really interesting results, and my supervisor will be presenting them at a conference in Germany in October! If anything gets found then I might be a co-author on the paper. It'd be nice to be published before I get into university! If you guys want a special blog post all about the work I've done then let me know and I'll put something together.

– Harry Saban – The Octave Doctor (PhD pending)