The African Tale

Originally published in the Informanté newspaper on Thursday, 26 May, 2016.


Africa. It’s the continent of life. With fertile ground and an abundance of wildlife, it is obvious that it has much potential. And 250 000 years ago, it birthed a new species that would change the face of the planet. A species known as Homo sapiens. Distinct and clever, this species was the first to show a remarkable capacity not only for adapting to its environment, but also for shaping it to its needs. Humanity was born in Africa, and over the next hundreds of thousands of years, we would spread across the globe.

The first great civilization was born in Africa. The great and bountiful Egyptian empire rose on the banks of the Nile in north-east Africa, fed by the river and aided by mankind’s latest innovation – irrigation. For over 3000 years, it was the beacon of civilization. From great works of art such as the Great Sphinx, to monuments to its rulers like the pyramids of Giza, Africa’s first great civilization shaped those that followed. The Great Library of Alexandria became the greatest worldwide repository of knowledge, and scholars from far and wide came to be enlightened by the accumulated wisdom scribed on the parchment in its halls.

Its conquest by the Romans marked the start of a new age, and civilizations in Europe and Asia grew from its ashes. But while civilization shifted outwards, Africa remained with its bountiful resources. Unfortunately, it would not remain untouched forever. About 200 years ago, European civilization reached back into Africa. European superpowers looked southwards and desired Africa’s great riches. The Scramble for Africa was on. Africa was to be colonized.

Colonialism changed Africa in several ways, and affected its development so deeply that it took a long while to rid ourselves of its shackles. Colonial borders were drawn not along existing tribal and proto-state lines, but along what the colonial militaries could defend from one another. Tribes and peoples were thus split across different countries, under different rulers – the Middle East suffers from this to this day. More than that, since the colonial forces vastly outclassed the native ones, colonialism taught that military might is what brings victory. Colonial governments were run by appointed officials with wide executive powers, not elected ones, and these governments suppressed and extinguished existing cultures (while importing their own via missionaries), suggesting that a strong executive government is the most effective form, since it had prevailed. And the economies of the colonies were geared towards resource extraction, not local development, since that brought the most profit to the mother countries.

But Africa was not taking this lying down. Africans rose up, fighting for their independence. And in fighting for independence, we emulated the colonial powers: militant rebel groups formed, with strong executive powers given to their leaders, financed by the clandestine sale of resources. And then, with the mother countries weakened in the aftermath of World War II, Africa finally regained its independence.

The rebel leaders became political figures, but still wished to wield absolute executive power. The administrative services that existed had been set up for an extraction economy, and so had the nascent economy itself. The rebel forces became the national military – but they were trained for offence, not peacekeeping.

So even though most of these new countries started with lofty constitutions and ideals, this mix proved unfortunate. They had new leaders, but several of the tribes in each country were not theirs. Policies favouring the leader’s tribe were implemented by executive decision, and the other tribes complained, unhappy with their representation in these new governments. People took to the streets, and, as the colonial powers had taught, the military was called in to quell the uprising. A period of civil war was ushered in.

And during the Cold War, one side was usually supported by the US, and the other by the Soviets, depending on their political leanings. This influx of external resources lengthened the period of conflict considerably. So up until the end of the 1980s, Africa's economic development languished as the continent became the neocolonial battleground of the Cold War.

But when the Cold War ended, both sides stopped fuelling the fire. And in most countries, either one side was wiped out, or with both sides out of resources, they made peace. The countries that made peace learned how to become democracies, mostly, with all the trial and error that implies. Those that didn't, dictatorships.

In the dictatorships, the countries remained colonies – but now for the ruler. In the democracies, however, the people's needs had to be met, for they were the voters. And there the inadequacies of an extraction economy became apparent. The people wanted hospitals, roads and houses, but the skills had never been developed – education had been limited to what extraction required. So education was invested in, while skills were imported, at a cost, to provide the rest. And since we sell our resources raw, most of the economic benefit accrues to the buyers, who can refine them. But Africa is slowly gaining the skills we need – and it shows! The people in dictatorships become restless when they see life improving everywhere but at home, and slowly the dictatorships are overthrown.

 
This change started 25 years ago, which means Africa's economic development is slowly gaining ground. In 1980, average literacy was between 30% and 40%. It's now between 60% and 70%. Dictatorships fall ever more regularly, though older-generation leaders do sometimes find it hard to let go of power. As our economies develop, we're tackling poverty.

And we have our first wave of new leaders taking over in peaceful transitions of power. In Namibia, we're on our third democratically elected president, President Geingob, and his Harambee Prosperity Plan is seeking to redress the legacy of colonialism. President Kagame of Rwanda is busy developing the country to the point of self-sufficiency, and the recently elected President John Magufuli of Tanzania is tackling inefficiencies in that country's government. 

Africa is slowly emerging from its slumber, and is building towards a new African renaissance. It's coming, and it might be sooner than you think.

Greater Than The Sum Of Its Parts

Originally published in the Informanté newspaper on Thursday, 19 May, 2016.


Over the past 100 years, the world has changed dramatically. 100 years ago, there were still 4 empires in Europe, the motor vehicle was only 30 years old, and the speed limit in most cities was 15 km/h. The word ‘teenager’ had not yet been invented, and women were not allowed to vote. Heroin was still prescribed by doctors, and the Eiffel Tower was the tallest structure in the world. 

Today, none of those statements are true. And as the world changed, its propensity to change accelerated as well. The first general-purpose electronic computer, ENIAC, was completed in 1946, and filled a room. With the invention of the integrated circuit, computers had shrunk to the size of filing cabinets by the mid-1960s. Gordon Moore observed that transistor counts were doubling roughly every two years – popularly restated as computing power doubling every 18 months – and the industry has kept pace since then. DARPA created a redundant system to network its computers together, initially connecting 4 computers across the United States in 1969. By 1977, over 100 computers were connected, and that ballooned to over 100 000 by 1989. By 2000, this network was called the internet, and had 100 million computers connected.

100 years ago, the telephone was barely 40 years old. Transcontinental telephone lines had just been installed, and worldwide communications were in their infancy. By 2003, the world had 1.263 billion telephones connecting people, but the telephone had already been eclipsed. In 1973, Martin Cooper demonstrated the first handheld mobile phone. By 1992, these devices could send text messages, and by 1996, they could access the internet.

Today, over 2 billion computers are operating across the globe, with 6.8 billion mobile devices connecting almost the entire human race together. The internet has an estimated 3.4 billion users currently, providing almost half of humanity access to the greatest repository of human knowledge ever assembled. The pace of change is increasing, and futurists conceive that in our lifetimes we may witness the technological singularity, where our creations surpass our human capabilities, and accelerate change even faster.

But humanity is nothing if not a product of our evolution. And to our ancestors, change represented one thing only. Danger. And danger breeds fear. We’re the descendants of those who noticed change and immediately began looking for a predator. True, this was quite often the case back then, but as we’ve progressed both socially and technologically, we’ve eliminated most of these dangers. Mankind is now the apex predator of this planet, with our life expectancy increasing by one year every three years. But still, we’ve retained our fear of change.

As change has accelerated over the past century, it has affected more and more people, as these same people have become more and more connected. They are exposed to more, and more varied, experiences ever faster, and we’ve not been able to keep our fear in check. We instinctively want to keep these changes at a distance, and retreat into our comfort zone. Into what we know, and what makes us feel safe. We retreat into what we perceive to be most like us, and shun that which is different.

And so the world descends into xenophobia. Donald Trump rose to prominence in the United States on the back of promises to keep Mexicans and Muslims out. The United Kingdom Independence Party got widespread support while campaigning on anti-immigration issues, with similar far-right parties rising in Europe in the wake of the Syrian Civil War, while not recognizing that the same fear of change is fuelling the cause of that war, ISIS. Even in South Africa, xenophobic attacks occur against Nigerians. Everywhere, the refrain is the same. “We should stand together against the uncivilized hordes!” But as Jimmy Carr so memorably said, “They say there’s safety in numbers. Tell that to six million Jews!”

Retreating into homogeneity (as in, all the same) seems part of our cultural fear response. But nature has already taught us a powerful lesson in the dangers of homogeneity. No individual, or group, has only strengths and no weaknesses. In nature, this has led to mass extinctions whenever rapid environmental changes happen. More than 99 percent of all species that ever lived on Earth – over five billion species – are estimated to have gone extinct after their habitable environment changed so much that they were unable to adapt and survive. For an ecosystem to be resilient against rapid changes, it needs to be biodiverse, with many species that can fill the gap left by any single one should it go extinct, whatever the cause.

UNESCO has stated that "cultural diversity is as necessary for humankind as biodiversity is for nature." Even in finance, the importance of diversification is well known. Harry Markowitz showed back in the 1950s that a diversified portfolio provides the same expected return as its component shares, yet can carry less risk than even the lowest-risk component – the risk reduction was greater than the sum of the individual parts! For showing this, he not only created modern portfolio theory, but also won the Nobel Memorial Prize in economics for his efforts.
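Markowitz’s insight is easy to see with two assets. The numbers below are purely hypothetical – two uncorrelated assets with 10% and 20% volatility, weighted 80/20 – but they illustrate a portfolio whose risk sits below that of even the safer asset:

```python
import numpy as np

sigma = np.array([0.10, 0.20])   # hypothetical volatilities of two assets
rho = 0.0                        # assumed correlation between them
weights = np.array([0.8, 0.2])   # portfolio weights

# Covariance matrix built from the volatilities and correlation
cov = np.array([[sigma[0] ** 2,             rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1] ** 2]])

# Portfolio variance is w' C w; risk is its square root
portfolio_std = float(weights @ cov @ weights) ** 0.5

print(round(portfolio_std, 4))  # 0.0894 – below the 0.10 risk of the safest asset
```

The expected return, meanwhile, is simply the weighted average of the components’ returns: the risk drops, but the return does not.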




It comes down to the old proverb that says, “Don’t put all your eggs in one basket.” Just as different animals provide different strengths to counteract others’ weaknesses in an ecosystem and make it resilient, and just as different shares’ volatilities cancel each other out in a portfolio to reduce risk, so too do different people provide different strengths and viewpoints in their communities, to counteract the weaknesses they have.

The Bene Gesserit famously said, “Fear is the mind-killer. Fear is the little-death that brings total obliteration.” When we embrace fear instead of change, we pull back from a diverse cultural community, and not only weaken it, but also significantly weaken ourselves. People are all different, and when we fear, we can latch onto any difference to divide people into an ‘us’ and ‘them.’ And when we don’t have an external threat to fear, we tear our societies apart to do so. Political processes become factionalized. A united people splits apart at the seams. A rainbow nation that can no longer claim that title, because each individual colour now charts its own path. 

We’ve seen it happen in South Africa, and there are signs that it is rumbling here in Namibia as well. When we divide ourselves into an ‘us’ and ‘them’ we are segregating our society. Keeping ourselves apart. A new apartheid. Namibia was founded on the premise that the rights denied to ‘others’ due to apartheid would never be repeated, and we’ve prospered as a united nation, and a united people. So when fear of change wants to drive you to talk about ‘them’, remember the other part of the Bene Gesserit saying, “I will face my fear. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.” Let’s embrace change and diversity, because we as a people will emerge stronger on the other side.

The Study Scientific

Originally published in the Informanté newspaper on Thursday, 12 May, 2016. 

With the prevalence of social media only increasing year by year, it is quickly becoming the primary news medium for many people. Informante, while primarily a print newspaper, has had to increase its social media footprint to maintain its reputation as the number one source of news for the Namibian populace, and as it now has in excess of 180 000 likes on Facebook, it can legitimately claim to be the news source that reaches the most Namibians.

But not all news from social media is served via the Informante – and a lot of it is simply shared by friends and family. And while I’ve expounded before on the importance of scientific literacy here in Theory of Interest, I’ve seen far too many stories shared where a ‘scientific study’ has indicated a surprising result, often a pleasant one for the person sharing, but when I dug a bit deeper the flaws in the reporting became evident.

Unfortunately, while our education system is geared towards providing people with a basic level of scientific literacy, it does not provide people with the ability to evaluate a scientific study even well into the tertiary level. The subject in my course that has enabled me to evaluate studies was an honours-level subject, and most courses don’t even have it due to it requiring a basic understanding of statistics – an artefact, perhaps, of a misplaced cultural fear of mathematics. 

Yet the basic skills required merely to evaluate, if not necessarily understand, a study are not extensive. It is important to be able to sort fact from fancy, as a short excerpt from Edward Tufte’s book “Data Analysis for Politics and Policy” starts to demonstrate:

“One day when I was a junior medical student, a very important Boston surgeon visited the school and delivered a great treatise on a large number of patients who had undergone successful operations for vascular reconstruction. At the end of the lecture, a young student at the back of the room timidly asked, “Do you have any controls?” Well, the great surgeon drew himself up to his full height, hit the desk, and said, “Do you mean did I not operate on half of the patients?” The hall grew very quiet then. The voice at the back of the room very hesitantly replied, “Yes, that’s what I had in mind.” Then the visitor’s fist really came down as he thundered, “Of course not. That would have doomed half of them to their death.” It was quiet then, and one could scarcely hear the small voice ask, “Which half?”” 

That humorous story highlights the importance of a key part of the scientific method – scientific control groups. Science is tested by experimentation, and a study is usually performed on a sample of a population. If I may be permitted to explain with an example close to my heart, consider cardiac medication. If a new pill is developed that can extend the life of cardiac patients, giving the pill to the entire sample and observing that fewer people die does not establish the pill’s effectiveness – after all, the selected sample could simply have lived longer due to a common external factor.

Instead, two samples of the population are selected: one is treated with the new pill, while the other group, the control group, is treated with either existing medication (if the researchers wish to compare efficacy against existing treatment) or a placebo (most commonly a sugar pill). Where possible, these studies are performed blind, to control for the placebo effect, where individuals can feel better simply because they are told they’re receiving treatment. The opposite effect, the nocebo effect, can also occur, where people exhibit symptoms due to perceived effects – WiFi sensitivity is an example of the nocebo effect.

Ideally, a good study would be a double-blind study – double blind in that neither the patient nor those administering the treatment know which group is receiving treatment and which is receiving the placebo – although in some cases this is not possible. This serves to eliminate bias, whether intentional or unintentional (a doctor revealing who receives the placebo, either accidentally or via body language), and it keeps the testing objective.

Thus, simply by reading the original study, you can already evaluate how scientific it is by checking the design of the study presented. But there is another measure you should also check, one that’s been mentioned already – sample size. This is where a bit of basic statistics becomes necessary – but luckily it is not complex.

Studies test whether results are statistically significant. This is usually expressed as a p-value (with p standing for probability), which corresponds to a confidence level. The two most common thresholds are p < 0.05 (testing that we’re 95% confident the effect is not due to random chance) and p < 0.01 (testing that we’re 99% confident the effect is not due to random chance).
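The idea behind a p-value can be made concrete with a small simulation. The sketch below is purely illustrative, with made-up numbers: it assumes a pill with no real effect, a true 50% recovery rate in both a treatment and a control group of 100 patients each, and a hypothetical observed difference of 15 recoveries, then asks how often pure chance produces a difference that large.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def trial(n=100, rate=0.5):
    """Count recoveries in a group of n patients with a given true recovery rate."""
    return sum(random.random() < rate for _ in range(n))

observed_diff = 15  # hypothetical: treatment had 15 more recoveries than control

# Under the null hypothesis (no real effect), how often does chance alone
# produce a difference at least this large between two groups?
runs = 10_000
extreme = sum(abs(trial() - trial()) >= observed_diff for _ in range(runs))
p_value = extreme / runs
print(p_value)  # roughly 0.04 – below the 0.05 threshold
```

With these assumed numbers, chance alone produces so large a difference only around 4% of the time, so the result would be reported as significant at p < 0.05, but not at p < 0.01.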


Next is our confidence interval, or margin of error. This is commonly expressed in results as, for example with our cardiac pill: its effect on reducing death in patients was 65% (95 percent confidence interval, 39 to 80 percent; p < 0.001). Note how wide that interval is – a 41-point range, from 39% to 80% – even for a highly significant result; demanding more confidence widens the error margin for the same sample size. At the usual 95% and 99% confidence levels, the margin is usually between 1% and 10%.
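For a simple proportion, the margin of error at a given confidence level can be sketched with the standard normal approximation. This is a minimal illustration with assumed numbers (an observed 50/50 split in a sample of 384 people), not the method behind the cardiac trial’s interval above, which comes from a more involved calculation and is not symmetric.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Normal-approximation margin of error for an observed proportion p_hat
    in a sample of size n; z = 1.96 corresponds to 95% confidence."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# An observed 50/50 split in a sample of 384 people:
print(round(margin_of_error(0.5, 384), 3))  # 0.05, i.e. a 5% margin of error
```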

Armed with these two values, it is possible to calculate the size of the sample you need for a statistically significant result – but that is usually the mathematics that puts people off. Luckily, sample sizes do not increase linearly with population size, and it is possible to use precalculated values to see whether the sample size of a study is large enough.

So, for a population of 10 000, at a 95% confidence level, you need a sample size of 370 people for a 5% margin of error, 1332 people for a 2.5% margin of error and 4899 for a 1% margin of error. For 99% confidence, this increases to 622 people, 2098 people and 6239 people respectively. But for a population of 1 000 000, at 95% confidence, you need a sample size of 384 people for 5%, 1534 people for a 2.5% error margin, and 9512 people for 1%.
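These figures can be reproduced with Cochran’s sample-size formula plus the finite population correction, assuming maximum variability (p = 0.5). A minimal sketch:

```python
import math

def sample_size(population, z, margin):
    """Cochran's formula with finite population correction,
    assuming maximum variability (p = 0.5)."""
    p = 0.5
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# z = 1.96 for 95% confidence, 2.576 for 99%
print(sample_size(10_000, 1.96, 0.05))     # 370
print(sample_size(1_000_000, 1.96, 0.05))  # 385 (tables often round to 384)
```

Rounding conventions differ between published tables, so results may vary by a person or two, but the shape is clear: the required sample barely grows as the population does.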

As you can see, sample sizes tend toward a certain upper limit, and you can develop a rule of thumb. For a study to claim it affects everyone, then, it needs at least 384 participants to be 95% confident about its results, with a 5% margin of error, moving up to 16 500 people to be 99% confident with a 1% margin of error. 

When you thus see a scientific study shared on Facebook that claims that “A Glass Of Red Wine Is The Equivalent To An Hour At The Gym,” read more closely. You’ll see the study was conducted on rats, which means it does not necessarily apply to humans, and that the study merely found that one compound in wine mimicked the effects of endurance similar to exercise training. Similarly, a story that claimed “Study Says Beer Helps You Lose Weight,” was conducted using one compound in beer, and conducted on mice. 

Proper scientific literacy is essential to be an informed citizen. Take the time and read a few scientific studies before sharing them. See if they’ve followed procedure, and had a control group. Examine the sample size, and take care when the study was conducted on as few as 20 people. 

Don’t just believe a study because it claims something you want to be true. After all, as John Oliver said on his show this Sunday, “In science, you don't just get to cherry-pick the parts that justify what you were going to do anyway. That's religion. You're thinking of religion.”