
Edit Where Edit's Due: The Rise of the Replicates

Monday, 8 June 2015

I’ve always been loath to discard any writing, but it is true that a better end result is arrived at by brutal editing:

“So the writer who breeds more words than he needs, is making a chore for the reader who reads.” ― Dr. Seuss

“Kill your darlings, kill your darlings, even when it breaks your egocentric little scribbler’s heart, kill your darlings.” ― Stephen King

“Put down everything that comes into your head and then you’re a writer. But an author is one who can judge his own stuff’s worth, without pity, and destroy most of it.” ― Colette

I’ve been rewriting the Dissent of Man’s introductory Chapter 1 and had included a section that describes the scientific method in simple terms, but hopefully not patronisingly. I’m hoping that by chopping out the following section, the chapter will be all the better for going without it. The section sits well as a standalone piece, albeit still a rough draft, so rather than throw it away completely, I offer it here to see what you think.

As ever, please help get this book fully funded by pledging, upgrading your existing pledges, gifting pledges and telling others. Thank you.

_____________________________________________________________________________


This isn’t about persuading anyone to think differently. It is about broadening our understanding, so the logic I’m using is common-sense reasoning of the kind that we need every day, which also happens to follow the same process as scientific reasoning. We construct our hypothesis, design an experiment to test it, collect empirical data and analyse it, drawing conclusions from those results. We usually then carry out an action based on our findings. You’ve glazed over. Some examples will help. They’re a bit banal, but the underlying ideas are important.

So, say, when you’re with a friend and they’re checking Snapchat or Facebook on their mobile, and they suddenly say, “He’s well fit!” Or, at a car boot or yard sale, you’re looking for a bargain, say an ornament. Or you’re crossing the road, using your eyes and ears to check for oncoming vehicles, just like you were nagged to do all those times as a child. In each of these scenarios, you are applying scientific reasoning, probably without realising it.

In each case, you subconsciously construct a null hypothesis, null because it initially doubts the proposition: “He’s well fit!”, “This is a bargain”, “It’s safe to cross now”. The null hypotheses would be, “No, he’s not”, “It’s too expensive” and “Don’t cross yet”. To test each, we need evidence to compare against our standard measure: proof that this doubt is unfounded. Asking your friend to show you the photo, asking the stall-holder for a closer look, and using those eyes and ears to search for traffic, each begins a feed of information that your brain processes, making comparisons with your preconceptions. In life, we build up a picture of the world through experience, revising our standards each time we collect updates.

The standard for attractiveness is our estimate of average looks, so it’s a continuous, ongoing experiment that begins in puberty and uses our peer groups to reinforce our assessments. We consciously acknowledge some characteristics within these groups, accompanied by giggles and sniggers, and those might be the features we relate when asked what we look for in a partner. Subconsciously, we also keep records of characteristics of which we might not knowingly be taking note. Our assessment is also informed by our genetics and upbringing. For example, being able to detect the hormones of a potential mate who differs in their genetic makeup minimises inbreeding and helps ensure better immunity in our children. There are many other complicated influences on mate choice, but one notable trend is that daughters tend to marry men who look like their father, perhaps surprisingly, even if he is an adoptive parent.

Our other two examples are equally complex; suffice it to say that, amongst other influences on us, our estimates of monetary value are constantly updated by what we find aesthetically pleasing, changes in our tastes in art and culture, recent and upcoming expenditure, and our current financial status. If an object’s value is of the same order of magnitude as our earnings, its purchase is going to impact domestic finances more than a much cheaper item would. Put another way, we’re going to take more care choosing a new car than a tin of beans.

Regarding cars and the crossing-the-road example, bumping into things as a toddler and collapsing onto a padded bottom perhaps wasn’t a particularly auspicious beginning to standing on our own two feet, although older relatives probably thought it had great comedy value. Those mishaps become less frequent as we grow older and stronger, all the time learning about our physical environment and practising our measurement of it. At first there are people who protect us from making mistakes, and then there comes a time when they think we are proficient enough to go it alone.

What has happened over time is an accumulation of data that we can recall individually, “How much was that vase we saw last week?”, or in summary as averages and ranges, “Most days, about 250 pupils walk to school, plus or minus 50 or so, depending on whether it’s raining or sunny”. A parent feels it’s okay to allow their children more freedom when the child’s behaviour indicates their understanding of the world has converged close enough to the grown-ups’ for them to be safe.

In crossing a road, we have learned that a car travelling towards us is likely to take a certain amount of time before it is too dangerous to step out in front of it. We construct our null hypothesis based on our stored data, average and range, and test it against this instance of the oncoming vehicle. If the car appears to be travelling at that average speed or slower, we reject the null hypothesis and step out to cross the road. If it’s travelling faster, let’s wait a while: null hypothesis accepted. Alternatively, if our estimates are wrong in some way, let’s just hope we live to update them.
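As a toy illustration of that decision, here is a minimal sketch in Python. Everything in it is my own invention for the sake of the example: the function name, the numbers, and the bare comparison standing in for our much richer judgement (which would also weigh the range of speeds we’ve seen).

```python
# A toy sketch of the crossing decision as a null-hypothesis test.
# The function name and all numbers are illustrative assumptions.

def decide_to_cross(observed_speed, average_speed):
    """Null hypothesis: "don't cross yet".

    observed_speed: our estimate of this car's speed
    average_speed:  the running average built from past experience
    """
    if observed_speed <= average_speed:
        return True   # reject the null hypothesis: step out and cross
    return False      # accept the null hypothesis: wait a while

# Say we've learned that cars on this road average about 30 mph.
print(decide_to_cross(observed_speed=25, average_speed=30))  # True: cross
print(decide_to_cross(observed_speed=45, average_speed=30))  # False: wait
```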

This repetitive learning consolidates knowledge by updating the stored value with each repeat of the experiment. The result is a moving or running average that advances at the same pace as the acquisition of new data. A new piece of information comes along, and the average is updated in proportion to the influence that single data point has upon the whole data set. Early on, a single datum will exert more influence on the moving average. As data accumulate, the influence new data have in deflecting the moving average is proportional to the reciprocal of the total number of data points, so the impact of adding new data decreases with the passage of time.

If an extreme value is added next, its effect is buffered by the data already contributing to the moving average. Movement in the moving average becomes damped as it settles upon a long-term value. This is the mechanism underlying the convergence mentioned above for our crossing-the-road example.
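The update rule is simple to state in code. In this minimal sketch (the class and numbers are mine, purely for illustration), each new value shifts the mean by (value − mean) / n, which is exactly the reciprocal influence described above:

```python
# A minimal sketch of the running average described above.
# Update rule: each new value shifts the mean by (value - mean) / n,
# so a new data point's influence is the reciprocal of the count.

class RunningMean:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, value):
        self.n += 1
        self.mean += (value - self.mean) / self.n  # reciprocal-weighted shift
        return self.mean

rm = RunningMean()
for speed in [30, 32, 29, 31, 30]:
    rm.update(speed)
print(round(rm.mean, 2))  # 30.4: settling towards a long-term value

# An extreme value arrives, but its effect is buffered by the data so far:
rm.update(60)
print(round(rm.mean, 2))  # 35.33: deflected, not dragged all the way to 60
```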

Science is forced to make compromises. It’s the bane of project leaders far and wide, having to tailor experimental design to fit restrictions in time and money. Without the luxury of time, the statistical basis for experimental design requires that experiments are repeated enough times for an accurate estimate of the variation in results to be possible. This procedure is called replication, where each repeat of the experiment under exactly the same test conditions is called a replicate. Ideally, you would have an equal number of control experiments: a control is essentially the situation before you’ve added anything to change it. Controls can be very mundane, but that’s the idea; each just needs to be a baseline against which you’re making a comparison: a composite photograph of an average person, an average price for a certain type of ornament, an empty road. You might also carry out repeated sampling within each replicate and each control but, significantly, replication involves repeating the whole experiment. What you’re looking for is whether the results from the first time were a fluke, and whether perturbations in conditions you are not manipulating are causing the outcomes you see.

Let’s say you’ve designed a robot that learns using artificial intelligence. Now, you’re pretty stoked about this version and have high hopes for it. The way you’re going to test it is by seeing if it can learn how to safely cross the road. It can’t do much worse than the other versions, which all nearly ended up mangled beneath the wheels of a truck, but you’ve tried something new and feel optimistic.

Taking repeated measurements of the robot on just a random setting, where it crosses the road irrespective of an oncoming vehicle, you use this as your control. You repeat this for different road settings to vary the frequency of vehicles and the gaps within the flow of traffic. Predictably, there is a frequency at which about half of the random crossings are successful. Also as expected, faster and slower flows produce worse and better performances, respectively. Now, you repeat all of the experiments with exactly the same settings, except this time, your robot has its brain switched on.

You’ve created a paired data set, running from low- to high-density traffic, where each half is also a data set of repeated measurements, one for your robot activated and the other as its control. I think of it as two lines of squares, side by side. They look like ladders, and we’re interested in the squares formed by the rungs. The squares at the bottom hold the data from the slowest, most spaced, lowest-density traffic. The squares at the top hold the data from the fastest, most crammed, highest-density traffic. One ladder holds the control measurements, somewhere around the middle of which will be the square of half-successful random crossings. The other ladder holds the active-robot results. Importantly, each square contains a data set of repeated measurements.

With a sigh of relief, we’re not going to go into the statistics any further, but if the road-crossing robot works as hoped, the appropriate statistical analysis will show that it really is better when it is using its brain. Amazingly, we are constantly resampling, storing and analysing our interaction with the world like this.
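For anyone curious about the shape of those two ladders, here is a minimal sketch in Python. Everything in it is a made-up assumption for illustration: the traffic densities, the success probabilities, and a paired t-test standing in for whatever “the appropriate statistical analysis” would really be.

```python
# A sketch of the "two ladders" paired design, with invented numbers.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(seed=1)
densities = np.linspace(0.1, 0.9, 9)  # rungs: low to high traffic density
trials = 50                           # repeated crossings per square

# Control ladder: random crossing succeeds less often as traffic thickens
# (around the middle rung, about half the crossings succeed).
control = [rng.binomial(trials, 1 - d) / trials for d in densities]

# Active ladder: assume the brain-on robot does somewhat better per rung.
active = [rng.binomial(trials, min(1.0, 1 - d + 0.15)) / trials
          for d in densities]

# Paired test: each rung's control square is matched with its active square.
stat, p = ttest_rel(active, control)
print(f"t = {stat:.2f}, p = {p:.4f}")  # a small p: brain-on really is better
```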

For those interested in the psychological theory underlying perception as discussed in this section, I’ve been drawing mainly on Constructivism as an extension of Cognitivism, and especially Assimilation and Accommodation, although simplistically, so as not to get too distracted by the detail of learning mechanisms. The reality is obviously messier than one of these purely computational models might suggest, with mistakes being introduced through misinterpretation and misinformation, as well as doubts about how we process vision, plus a simple inability to solve some problems. Proponents and critics of this way of thinking about thought include some big thinkers, such as Turing, Piaget, Hofstadter and Penrose, and there’s some further reading to explore these ideas provided below.

Whatever the details of any particular model of perception, it is clear that we test our environment when carrying out everyday activities. We store that information and keep an updated summary of moving average and range. If we didn’t have a natural way to replicate our everyday experimentation, we wouldn’t accumulate knowledge about our surroundings and assemble it into a comprehensive picture of our environment. We continue doing this throughout life, but the most intense period is during childhood and into adolescence. Without this ability, we would be far less prepared to look after ourselves, which is why inexperience can get us into a whole lot of trouble.

 

PLEASE PLEDGE

 

Further Reading

  • Alan Turing (1937) On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Ser. 2, Vol. 42.
  • Jean Piaget (1952) The Psychology of Intelligence. Routledge Classics.
  • Douglas Hofstadter (1979) Gödel, Escher, Bach: An Eternal Golden Braid. Vintage Books.
  • Roger Penrose (1994) Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford University Press.
